[jira] [Commented] (HADOOP-10525) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975425#comment-13975425
 ] 

Hadoop QA commented on HADOOP-10525:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641026/HADOOP-10525.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3820//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3820//console

This message is automatically generated.

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HADOOP-10525
 URL: https://issues.apache.org/jira/browse/HADOOP-10525
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10525.patch


 From [hadoop-user mailing 
 list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3C534FACD3.8040907%40corp.badoo.com%3E].
 {code}
 # 30-day backup
 # log4j.appender.DRFA.MaxBackupIndex=30
 {code}
 In {{log4j.properties}}, the above lines should be removed because 
 DailyRollingFileAppender (DRFA) doesn't support the MaxBackupIndex config.
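 If a cap on rotated logs is actually wanted, log4j 1.x's {{RollingFileAppender}} (RFA) does honor {{MaxBackupIndex}}. A minimal sketch (appender name and sizes are illustrative, not part of this issue's patch):
 {code}
 log4j.appender.RFA=org.apache.log4j.RollingFileAppender
 log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
 log4j.appender.RFA.MaxFileSize=256MB
 log4j.appender.RFA.MaxBackupIndex=30
 log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
 log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
 {code}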



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10525) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-04-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975426#comment-13975426
 ] 

Akira AJISAKA commented on HADOOP-10525:


The patch is just to remove the comments, so new tests are not needed.



[jira] [Updated] (HADOOP-7723) Automatically generate good Release Notes

2014-04-21 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HADOOP-7723:
--

Target Version/s: 1.3.0  (was: 0.23.0, 1.3.0)

 Automatically generate good Release Notes
 -

 Key: HADOOP-7723
 URL: https://issues.apache.org/jira/browse/HADOOP-7723
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.20.204.0, 0.23.0
Reporter: Matt Foley
Assignee: Matt Foley

 In branch-0.20-security, there is a tool, src/docs/relnotes.py, that 
 automatically generates Release Notes.  Fix its deficiencies and port it up to 
 trunk.





[jira] [Commented] (HADOOP-7723) Automatically generate good Release Notes

2014-04-21 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975649#comment-13975649
 ] 

Mit Desai commented on HADOOP-7723:
---

Haven't heard back in a while. As 0.23 is going into maintenance mode, 
removing 0.23 from the target versions.



[jira] [Updated] (HADOOP-7661) FileSystem.getCanonicalServiceName throws NPE for any file system uri that doesn't have an authority.

2014-04-21 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HADOOP-7661:
--

Target Version/s: 0.20.205.0, 3.0.0  (was: 0.20.205.0, 0.23.0)

 FileSystem.getCanonicalServiceName throws NPE for any file system uri that 
 doesn't have an authority.
 -

 Key: HADOOP-7661
 URL: https://issues.apache.org/jira/browse/HADOOP-7661
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HADOOP-7661.20s.1.patch, HADOOP-7661.20s.2.patch, 
 HADOOP-7661.20s.3.patch


 FileSystem.getCanonicalServiceName throws NPE for any file system uri that 
 doesn't have an authority. 
 
 java.lang.NullPointerException
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:138)
 at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:261)
 at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:174)
 





[jira] [Updated] (HADOOP-7661) FileSystem.getCanonicalServiceName throws NPE for any file system uri that doesn't have an authority.

2014-04-21 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HADOOP-7661:
--

Fix Version/s: (was: 0.20.205.0)



[jira] [Commented] (HADOOP-7661) FileSystem.getCanonicalServiceName throws NPE for any file system uri that doesn't have an authority.

2014-04-21 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975661#comment-13975661
 ] 

Mit Desai commented on HADOOP-7661:
---

I haven't heard anything in the last couple of days. 0.23 is going into 
maintenance mode, so I'm re-targeting this to 3.0.0 and removing the fix version. 
It should be set once the patch is committed.



[jira] [Commented] (HADOOP-10467) Enable proxyuser specification to support list of users in addition to list of groups.

2014-04-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975719#comment-13975719
 ] 

Daryn Sharp commented on HADOOP-10467:
--

I understand now.  Personally I'd find it more intuitive to have a separate key 
for a proxy user's allowed users.  It avoids the redundancy of two ways to 
configure groups.  More importantly I've never liked the queue acl config.  
Requiring a blank space at the beginning of a config to signify no users is not 
intuitive to the ordinary user...  Unless anyone else has a strong opinion, I'd 
go with two keys.
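For illustration, the two-key scheme would look something like the following in core-site.xml (the {{oozie}} proxy user and the exact key names are a sketch of the proposal, not a committed interface):
{code}
<property>
  <name>hadoop.proxyuser.oozie.users</name>
  <value>alice,bob</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>etl</value>
</property>
{code}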

 Enable proxyuser specification to support list of users in addition to list 
 of groups.
 --

 Key: HADOOP-10467
 URL: https://issues.apache.org/jira/browse/HADOOP-10467
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10467.patch, HADOOP-10467.patch, 
 HADOOP-10467.patch, HADOOP-10467.patch


 Today, the proxy user specification supports only a list of groups. In some 
 cases, it is useful to specify a list of users in addition to the list of 
 groups. 





[jira] [Commented] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975743#comment-13975743
 ] 

Colin Patrick McCabe commented on HADOOP-10510:
---

I don't think using gist here is a good practice, since if that site ever goes 
down (or removes your particular entry), we will be left wondering what this 
bug is about.  Can you add the relevant information to JIRA?  Thanks.

 TestSymlinkLocalFSFileContext tests are failing
 ---

 Key: HADOOP-10510
 URL: https://issues.apache.org/jira/browse/HADOOP-10510
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
 Environment: Linux
Reporter: Daniel Darabos

 Test results:
 https://gist.github.com/oza/9965197
 This was mentioned on hadoop-common-dev:
 http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201404.mbox/%3CCAAD07OKRSmx9VSjmfk1YxyBmnFM8mwZSp%3DizP8yKKwoXYvn3Qg%40mail.gmail.com%3E
 Can you suggest a workaround in the meantime? I'd like to send a pull request 
 for an unrelated bug, but these failures mean I cannot build hadoop-common to 
 test my fix. Thanks.





[jira] [Commented] (HADOOP-10448) Support pluggable mechanism to specify proxy user settings

2014-04-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975748#comment-13975748
 ] 

Daryn Sharp commented on HADOOP-10448:
--

A few initial observations: less synchronization is always good, but removing all 
synchronization will cause race conditions when accessing the non-thread-safe data 
structures during a refresh.  Does it make sense for the get*ConfKey methods to be 
part of the API?  That seems like an implementation detail of a conf-based 
provider that is inapplicable to other abstract providers.

While I like pluggable interfaces, I have concerns about this use case.  The 
proxy checks must be very performant to avoid stalling the jetty threads or the 
limited number of ipc readers.  Stalling the readers is pretty serious!  I'm 
just curious what alternate implementation you intend to use.  Regarding the conf 
manageability concern, we use xml includes to break the proxy user config into 
a separate file.
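The xml-include approach can be sketched like this (file name is illustrative; Hadoop's Configuration parser processes XInclude in config files):
{code}
<?xml version="1.0"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- proxyuser-site.xml holds only the hadoop.proxyuser.* properties -->
  <xi:include href="proxyuser-site.xml"/>
</configuration>
{code}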

 Support pluggable mechanism to specify proxy user settings
 --

 Key: HADOOP-10448
 URL: https://issues.apache.org/jira/browse/HADOOP-10448
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch


 We have a requirement to support a large number of superusers (users who 
 impersonate another user; see 
 http://hadoop.apache.org/docs/r1.2.1/Secure_Impersonation.html). 
 Currently each superuser needs to be defined in core-site.xml via 
 proxyuser settings, which becomes cumbersome when there are 1000 entries.
 It seems useful to have a pluggable mechanism to specify proxy user settings, 
 with the current approach as the default. 





[jira] [Commented] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-04-21 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975752#comment-13975752
 ] 

Tsuyoshi OZAWA commented on HADOOP-10510:
-

Hi Colin, thank you for pointing that out!  The following log is from the test 
failure:

{quote}
Results :
 
Failed tests: 
  TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testStatDanglingLink:115->SymlinkBaseTest.testStatDanglingLink:301 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testStatLinkToFile:244 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testStatLinkToDir:286 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkUsingRelPaths:447->SymlinkBaseTest.checkLink:381 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkUsingAbsPaths:472->SymlinkBaseTest.checkLink:381 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkUsingFullyQualPaths:503->SymlinkBaseTest.checkLink:381 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkToDirectory:627 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkViaLink:679 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlinkViaSymlink:897 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlinkNonExistantDest:1036 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlinkToExistingFile:1063 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlink:1134 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testStatLinkToFile:244 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testStatLinkToDir:286 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkUsingRelPaths:447->SymlinkBaseTest.checkLink:381 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkUsingAbsPaths:472->SymlinkBaseTest.checkLink:381 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkUsingFullyQualPaths:503->SymlinkBaseTest.checkLink:381 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkToDirectory:627 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkViaLink:679 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlinkViaSymlink:897 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlinkNonExistantDest:1036 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlinkToExistingFile:1063 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlink:1134 null
  TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testStatDanglingLink:115->SymlinkBaseTest.testStatDanglingLink:301 null
 
Tests in error: 
  TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testDanglingLink:163 ? IO Path...
  TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testGetLinkStatusPartQualTarget:201 ? IO
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkToDotDotPrefix:822 ? IO
  TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testDanglingLink:163 ? IO Pat...
  TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testGetLinkStatusPartQualTarget:201 ? IO
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkToDotDotPrefix:822 ? IO
{quote}



[jira] [Updated] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10522:


Target Version/s: 2.4.1  (was: 2.5.0)

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.





[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2014-04-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975775#comment-13975775
 ] 

Suresh Srinivas commented on HADOOP-9919:
-

+1 for this change. [~ajisakaa], I will commit this shortly.

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.
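 A YARN-era replacement could look something like the following (a sketch only; 
 sink names and file names are illustrative, not the committed patch):
 {code}
 #resourcemanager.sink.file.context=yarn
 #resourcemanager.sink.file.filename=resourcemanager-yarn-metrics.out
 #nodemanager.sink.file.context=yarn
 #nodemanager.sink.file.filename=nodemanager-yarn-metrics.out
 {code}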





[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


Summary: Update hadoop-metrics2.properties to Yarn  (was: Rewrite 
hadoop-metrics2.properties)



[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


Summary: Update hadoop-metrics2.properties examples to Yarn  (was: Update 
hadoop-metrics2.properties to Yarn)



[jira] [Commented] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975785#comment-13975785
 ] 

Colin Patrick McCabe commented on HADOOP-10510:
---

That's a very odd error message.  Just null?  Have you tried a clean rebuild, 
or running the tests on their own to get a more informative message?



[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975795#comment-13975795
 ] 

Daryn Sharp commented on HADOOP-10522:
--

+1  Looks good to me.  Since the *_r functions return errno, I think that's the 
safest value to use.



[jira] [Updated] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-04-21 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-10510:


Attachment: TestSymlinkLocalFSFileContext.txt
TestSymlinkLocalFSFileContext-output.txt



[jira] [Commented] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-04-21 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975806#comment-13975806
 ] 

Tsuyoshi OZAWA commented on HADOOP-10510:
-

Attached the full log of {{mvn clean test -Dtest=TestSymlinkLocalFSFileContext}}.



[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.



[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975831#comment-13975831
 ] 

Colin Patrick McCabe commented on HADOOP-10522:
---

{code}
+// The following call returns errno. Reading the global errno without
+// locking is not thread-safe.
{code}

As [~cnauroth] mentioned, this is wrong.  Please remove it.

{code}
+pwd = NULL;
+ret = getpwnam_r(username, &uinfo->pwd, uinfo->buf, uinfo->buf_sz, &pwd);
-} while ((!pwd) && (errno == EINTR));
{code}

Unfortunately, this is wrong too. :(

{{getgrgid_r}} does not set {{errno}} (or at least is not documented to do so 
by POSIX).  Instead, it returns the error number directly.  Here's the man page 
on my system:

{code}
   On success, getgrnam_r() and getgrgid_r() return zero, and set *result to 
grp.  If no matching group record was found, these functions return 0 and 
store NULL in *result.  In case of error, an error number is returned, and 
NULL is stored in *result.
{code}

Notice that {{errno}} is not mentioned.

{code}
   ret = hadoop_user_info_fetch(uinfo, username);
-  if (ret == ENOENT) {
-    jgroups = (*env)->NewObjectArray(env, 0, g_string_clazz, NULL);
+  if (ret) {
+    if (ret == ENOENT) {
+      jgroups = (*env)->NewObjectArray(env, 0, g_string_clazz, NULL);
+    } else { // handle other errors
+      char buf[128];
+      snprintf(buf, sizeof(buf), "getgrouplist: error looking up user. %d (%s)",
+               ret, terror(ret));
+      THROW(env, "java/lang/RuntimeException", buf);
+    }
{code}

Try {{(*env)->Throw(env, newRuntimeException("getgrouplist: error looking up 
user. %d (%s)", ret, terror(ret)))}} here instead.

{code}
+  for (i = 0, ret = 0; i < MAX_USER_LOOKUP_TRIES; i++) {
{code}

This is wrong.  Just because we get {{EINTR}} 5 times doesn't mean we should 
fail.  Maybe we're just handling a lot of signals (the JVM sends signals 
internally).  Also, why are we increasing the buffer size when we get 
{{EINTR}}?  We should only increase the buffer size when we get {{ERANGE}}.  I 
think the better way to do this would be to keep the old loop, but break out if 
we got an {{ERANGE}} and the buffer size was above some fixed amount.  However, 
this still seems strange to me.  Clearly the underlying library is buggy, if it 
keeps telling us {{ERANGE}} forever.  Is there a particular bug we are trying 
to work around in this patch?



[jira] [Updated] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10522:


   Resolution: Fixed
Fix Version/s: 2.4.1
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, Chris and Daryn. I've committed this to trunk, branch-2 
and branch-2.4.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975840#comment-13975840
 ] 

Colin Patrick McCabe commented on HADOOP-10522:
---

Let's hold off on committing this until we fix these issues.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975857#comment-13975857
 ] 

Colin Patrick McCabe commented on HADOOP-10522:
---

[~kihwal], it seems like you missed my review.  You probably didn't hit refresh 
soon enough.  Do you mind if I back this out so we can fix some of the issues 
with this patch?

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-04-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975873#comment-13975873
 ] 

Andrew Purtell commented on HADOOP-10150:
-

bq.  there's one more layer to consider: virtualized hadoop clusters.

An interesting paper on this topic is http://eprint.iacr.org/2014/248.pdf, 
which discusses side channel attacks on AES on Xen and VMWare platforms. JCE 
ciphers were not included in the analysis but should be suspect until proven 
otherwise. JRE >= 8 will accelerate AES using AES-NI instructions. Since AES-NI 
performs each full round of AES in a hardware register all known side channel 
attacks are prevented. 

 Hadoop cryptographic file system
 

 Key: HADOOP-10150
 URL: https://issues.apache.org/jira/browse/HADOOP-10150
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
  Labels: rhino
 Fix For: 3.0.0

 Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
 system-V2.docx, HADOOP cryptographic file system.pdf, 
 HDFSDataAtRestEncryptionAlternatives.pdf, 
 HDFSDataatRestEncryptionAttackVectors.pdf, 
 HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based 
 on INode feature.patch


 There is an increasing need for securing data when Hadoop customers use 
 various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
 on.
 HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
 on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
 transparent to upper layer applications. It’s configurable, scalable and fast.
 High level requirements:
 1. Transparent to and no modification required for upper layer 
 applications.
 2. “Seek”, “PositionedReadable” are supported for input stream of CFS if 
 the wrapped file system supports them.
 3. Very high performance for encryption and decryption, they will not 
 become bottleneck.
 4. Can decorate HDFS and all other file systems in Hadoop, and will not 
 modify existing structure of file system, such as namenode and datanode 
 structure if the wrapped file system is HDFS.
 5. Admin can configure encryption policies, such as which directory will 
 be encrypted.
 6. A robust key management framework.
 7. Support Pread and append operations if the wrapped file system supports 
 them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10503) Move junit up to v 4.11

2014-04-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10503:
---

Attachment: HADOOP-10503.dummy.patch

I'm attaching a dummy patch that modifies all the same files as in my prior 
patch, but only by adding a one-line comment change.  There is no intention to 
commit this.  This is just intended to test my theory that Jenkins is timing 
out due to some bug in our automation for patches spanning multiple 
sub-modules, rather than a problem with the JUnit upgrade patch itself.

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch, HADOOP-10503.4.patch, HADOOP-10503.dummy.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10526) Chance for Stream leakage in CompressorStream

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee moved HDFS-2630 to HADOOP-10526:
---

 Target Version/s: 2.5.0  (was: 3.0.0, 2.5.0)
Affects Version/s: (was: 0.23.0)
   0.23.0
  Key: HADOOP-10526  (was: HDFS-2630)
  Project: Hadoop Common  (was: Hadoop HDFS)

 Chance for Stream leakage in CompressorStream
 -

 Key: HADOOP-10526
 URL: https://issues.apache.org/jira/browse/HADOOP-10526
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: SreeHari
Assignee: Rushabh S Shah
Priority: Minor
 Attachments: HDFS-2630-v2.patch, HDFS-2630.patch


 In CompressorStream.close, finish() can throw IOException. But out will not 
 be closed in that situation since it is not in a finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10526) Chance for Stream leakage in CompressorStream

2014-04-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975919#comment-13975919
 ] 

Kihwal Lee commented on HADOOP-10526:
-

+1 The patch looks good.

 Chance for Stream leakage in CompressorStream
 -

 Key: HADOOP-10526
 URL: https://issues.apache.org/jira/browse/HADOOP-10526
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: SreeHari
Assignee: Rushabh S Shah
Priority: Minor
 Attachments: HDFS-2630-v2.patch, HDFS-2630.patch


 In CompressorStream.close, finish() can throw IOException. But out will not 
 be closed in that situation since it is not in a finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975918#comment-13975918
 ] 

Hudson commented on HADOOP-9919:


SUCCESS: Integrated in Hadoop-trunk-Commit #5543 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5543/])
HADOOP-9919. Update hadoop-metrics2.properties examples to Yarn. Contributed by 
Akira AJISAKA. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1588943)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties


 Update hadoop-metrics2.properties examples to Yarn
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-10526) Chance for Stream leakage in CompressorStream

2014-04-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975919#comment-13975919
 ] 

Kihwal Lee edited comment on HADOOP-10526 at 4/21/14 7:16 PM:
--

I moved it from hdfs to common.
+1 The patch looks good.


was (Author: kihwal):
+1 The patch looks good.

 Chance for Stream leakage in CompressorStream
 -

 Key: HADOOP-10526
 URL: https://issues.apache.org/jira/browse/HADOOP-10526
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: SreeHari
Assignee: Rushabh S Shah
Priority: Minor
 Attachments: HDFS-2630-v2.patch, HDFS-2630.patch


 In CompressorStream.close, finish() can throw IOException. But out will not 
 be closed in that situation since it is not in a finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10526) Chance for Stream leakage in CompressorStream

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10526:


   Resolution: Fixed
Fix Version/s: 2.5.0
   3.0.0
 Release Note:   (was: Attaching patch.)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks for working on the patch, 
Rushabh.

 Chance for Stream leakage in CompressorStream
 -

 Key: HADOOP-10526
 URL: https://issues.apache.org/jira/browse/HADOOP-10526
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: SreeHari
Assignee: Rushabh S Shah
Priority: Minor
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-2630-v2.patch, HDFS-2630.patch


 In CompressorStream.close, finish() can throw IOException. But out will not 
 be closed in that situation since it is not in a finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975933#comment-13975933
 ] 

Hudson commented on HADOOP-10522:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5544 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5544/])
HADOOP-10522. JniBasedUnixGroupMapping mishandles errors. Contributed by Kihwal 
Lee. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1588949)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/hadoop_group_info.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/hadoop_user_info.c


 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10526) Chance for Stream leakage in CompressorStream

2014-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975934#comment-13975934
 ] 

Hudson commented on HADOOP-10526:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5544 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5544/])
HADOOP-10526. Chance for Stream leakage in CompressorStream. Contributed by 
Rushabh Shah. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1588970)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressorStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressorStream.java


 Chance for Stream leakage in CompressorStream
 -

 Key: HADOOP-10526
 URL: https://issues.apache.org/jira/browse/HADOOP-10526
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: SreeHari
Assignee: Rushabh S Shah
Priority: Minor
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-2630-v2.patch, HDFS-2630.patch


 In CompressorStream.close, finish() can throw IOException. But out will not 
 be closed in that situation since it is not in a finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9648) Fix build native library on mac osx

2014-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976057#comment-13976057
 ] 

Hadoop QA commented on HADOOP-9648:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617363/HADOOP-9648.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

org.apache.hadoop.http.TestHttpServerLifecycle

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3822//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3822//console

This message is automatically generated.

 Fix build native library on mac osx
 ---

 Key: HADOOP-9648
 URL: https://issues.apache.org/jira/browse/HADOOP-9648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.4, 1.2.0, 1.1.2, 2.0.5-alpha
Reporter: Kirill A. Korinskiy
Assignee: Binglin Chang
 Attachments: HADOOP-9648-native-osx.1.0.4.patch, 
 HADOOP-9648-native-osx.1.1.2.patch, HADOOP-9648-native-osx.1.2.0.patch, 
 HADOOP-9648-native-osx.2.0.5-alpha-rc1.patch, HADOOP-9648.v2.patch


 Some patches to fix building the Hadoop native library on OS X 10.7/10.8.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976089#comment-13976089
 ] 

Kihwal Lee commented on HADOOP-10522:
-

Thanks for the feedback, [~cmccabe] 
bq. getgrgid_r does not set errno (or at least is not documented to do so by 
POSIX).
The patch does not use errno. It explicitly uses the return value.

bq. Try (*env)->Throw(env, newRuntimeException("getgrouplist: error looking up 
user. %d (%s)", ret, terror(ret))) here instead.
I don't think it is correct to throw a RuntimeException when a user is not 
found (ENOENT).

bq.  Just because we get EINTR 5 times doesn't mean we should fail. 
Probably. I will fix that.

bq. Also, why are we increasing the buffer size when we get EINTR?
We don't.

Another bug in my patch is returning EIO when a user/group is not found.  I 
will fix it and enable retrying on EINTR indefinitely in a separate jira. I 
will file one and attach a patch.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-10527:
---

 Summary: Fix incorrect return code and allow more retries on EINTR
 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
More retries should be allowed just in case.

Also, when a user/group lookup returns no entries, the wrapper methods are 
returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10527:


Target Version/s: 2.4.1
  Status: Patch Available  (was: Open)

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-10527:
---

Assignee: Kihwal Lee

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10527:


Attachment: hadoop-10527.patch

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976110#comment-13976110
 ] 

Kihwal Lee commented on HADOOP-10522:
-

Filed HADOOP-10527 and attached a patch.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976089#comment-13976089
 ] 

Kihwal Lee edited comment on HADOOP-10522 at 4/21/14 10:10 PM:
---

Thanks for the feedback, [~cmccabe] 
bq. getgrgid_r does not set errno (or at least is not documented to do so by 
POSIX).
The patch does not use errno. It explicitly uses the return value. The code 
before my patch used errno.

bq. Try (*env)->Throw(env, newRuntimeException("getgrouplist: error looking up 
user. %d (%s)", ret, terror(ret))) here instead.
I don't think it is correct to throw a RuntimeException when a user is not 
found (ENOENT).

bq.  Just because we get EINTR 5 times doesn't mean we should fail. 
Probably. I will fix that.

bq. Also, why are we increasing the buffer size when we get EINTR?
We don't.

Another bug in my patch is to return EIO when user/group is not found.  I will 
fix it and enable retries for EINTR forever in a separate jira. I will file one 
and attach a patch.


was (Author: kihwal):
Thanks for the feedback, [~cmccabe] 
bq. getgrgid_r does not set errno (or at least is not documented to do so by 
POSIX).
The patch does not use errno. It explicitly uses the return value.

bq. Try (*env)->Throw(env, newRuntimeException("getgrouplist: error looking up 
user. %d (%s)", ret, terror(ret))) here instead.
I don't think it is correct to throw a RuntimeException when a user is not 
found (ENOENT).

bq.  Just because we get EINTR 5 times doesn't mean we should fail. 
Probably. I will fix that.

bq. Also, why are we increasing the buffer size when we get EINTR?
We don't.

Another bug in my patch is to return EIO when user/group is not found.  I will 
fix it and enable retries for EINTR forever in a separate jira. I will file one 
and attach a patch.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976116#comment-13976116
 ] 

Colin Patrick McCabe commented on HADOOP-10522:
---

Thanks for filing the follow-up, Kihwal.  I'll move my responses there.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 3.0.0, 2.4.1

 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976130#comment-13976130
 ] 

Colin Patrick McCabe commented on HADOOP-10527:
---

Kihwal wrote:

bq. I don't think it is correct to throw a RuntimeException when a user is not 
found (ENOENT).

We can't throw an IOException because the JNI function does not declare 
{{throws IOException}}.  Throwing an exception not in the throw spec causes 
undefined behavior.  If you want to change the throw spec, that would also work.

You can see in JniBasedUnixGroupsMapping.java that the JNI function is declared 
like this:
{code}
  native static String[] getGroupsForUser(String username);
{code}

(There is no {{throws IOException}})

bq. The patch does not use errno. It explicitly uses the return value. The code 
before my patch used errno.

OK, I missed the part where you removed this line:
{code}
-} while ((!pwd) && (errno == EINTR));
{code}

That was indeed a good line to remove.

However, your patch adds these lines:
{code}
+// The following call returns errno. Reading the global errno without
+// locking is not thread-safe.
{code}

This is doubly wrong because:
* {{getpwnam_r}} doesn't set errno -- it returns the error code as its result
* {{errno}} is thread-local, not global, so reading it is always thread-safe

So we need to remove this comment.

I think we need to get rid of {{MAX_USER_LOOKUP_TRIES}} and replace it with a 
maximum buffer size that we don't exceed when we get {{ERANGE}}.  The problem 
with the current scheme is that it doesn't actually bound the buffer size, 
because the cap is never considered when deciding whether to double the buffer.
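
A sketch of that scheme follows. The constants and names are illustrative, not 
the values Hadoop chose, and EINTR retries are omitted for brevity:

```c
#include <errno.h>
#include <pwd.h>
#include <stdlib.h>

/* Hypothetical sizes for illustration only (both powers of two). */
enum { INITIAL_BUF = 1024, MAX_BUF = 64 * 1024 };

/* Grow the buffer on ERANGE, but never past MAX_BUF, so the loop is
 * bounded by buffer size instead of an arbitrary retry counter.
 * On success *result is non-NULL and the caller must free *buf_out. */
static int lookup_capped(const char *name, struct passwd *pwd,
                         char **buf_out, struct passwd **result)
{
    size_t buflen = INITIAL_BUF;
    char *buf = NULL;
    int ret;

    for (;;) {
        char *nbuf = realloc(buf, buflen);
        if (nbuf == NULL) {
            free(buf);
            *buf_out = NULL;
            return ENOMEM;
        }
        buf = nbuf;
        ret = getpwnam_r(name, pwd, buf, buflen, result);
        if (ret != ERANGE || buflen >= MAX_BUF)
            break;        /* done, failed hard, or entry exceeds the cap */
        buflen *= 2;      /* at most log2(MAX_BUF / INITIAL_BUF) doublings */
    }
    *buf_out = buf;       /* pwd's string fields point into this buffer */
    return ret;
}
```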

{code}
+  char buf[128];
+  snprintf(buf, sizeof(buf), "getgrouplist: error looking up user. %d (%s)",
+           ret, terror(ret));
+  THROW(env, "java/lang/RuntimeException", buf);
{code}

Should use either {{newRuntimeException}} or {{newIOException}} here.

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10527:


Status: Open  (was: Patch Available)

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976146#comment-13976146
 ] 

Kihwal Lee commented on HADOOP-10527:
-

bq. We can't throw an IOException because ...
I am not suggesting that. If a lookup successfully returns no result, it is 
supposed to return an empty collection without throwing any exception.

bq. So we need to remove this comment.
Agreed. 

bq. I think we need to get rid of MAX_USER_LOOKUP_TRIES and replace it with a 
maximum buffer size
What do you think is a reasonable buffer size?

bq. Should use either newRuntimeException or newIOException here.
I think I misread your earlier comment in HADOOP-10522. I understand now. 

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976153#comment-13976153
 ] 

Hadoop QA commented on HADOOP-10527:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641130/hadoop-10527.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3823//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3823//console

This message is automatically generated.

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10527:


Status: Patch Available  (was: Open)

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10527:


Attachment: (was: hadoop-10527.patch)

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10527:


Attachment: hadoop-10527.patch

The new patch sets the buffer size limit to 64K.

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10527) Fix incorrect return code and allow more retries on EINTR

2014-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976223#comment-13976223
 ] 

Hadoop QA commented on HADOOP-10527:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641147/hadoop-10527.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3824//console

This message is automatically generated.

 Fix incorrect return code and allow more retries on EINTR
 -

 Key: HADOOP-10527
 URL: https://issues.apache.org/jira/browse/HADOOP-10527
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hadoop-10527.patch


 After HADOOP-10522, user/group look-up will only try up to 5 times on EINTR.  
 More retries should be allowed just in case.
 Also, when a user/group lookup returns no entries, the wrapper methods are 
 returning EIO, instead of ENOENT.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)
howie yu created HADOOP-10528:
-

 Summary: A TokenKeyProvider for a Centralized Key Manager Server 
(BEE: bee-key-manager)
 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu


This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
Hadoop crypto codec framework, but keys can only be retrieved from a local 
Java KeyStore file. For convenience, we designed a Centralized Key Manager 
Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
SPNEGO/SASL. For the detailed design and usage, please refer to 
https://github.com/trendmicro/BEE. 

Moreover, there are many more requests around Hadoop data encryption (such as 
providing a standalone module, supporting KMIP, etc.); if anyone is interested 
in those features, please let us know. 
 




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10529) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)
howie yu created HADOOP-10529:
-

 Summary: A TokenKeyProvider for a Centralized Key Manager Server 
(BEE: bee-key-manager)
 Key: HADOOP-10529
 URL: https://issues.apache.org/jira/browse/HADOOP-10529
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu


This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
Hadoop crypto codec framework, but keys can only be retrieved from a local 
Java KeyStore file. For convenience, we designed a Centralized Key Manager 
Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
SPNEGO/SASL. For the detailed design and usage, please refer to 
https://github.com/trendmicro/BEE. 

Moreover, there are many more requests around Hadoop data encryption (such as 
providing a standalone module, supporting KMIP, etc.); if anyone is interested 
in those features, please let us know. 
 




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10529) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu resolved HADOOP-10529.
---

Resolution: Duplicate

 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10529
 URL: https://issues.apache.org/jira/browse/HADOOP-10529
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
  Labels: patch

 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu updated HADOOP-10528:
--

Attachment: HADOOP-10528.patch

HADOOP-10528.patch

 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu updated HADOOP-10528:
--

Description: 
This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
Hadoop crypto codec framework, but keys can only be retrieved from a local 
Java KeyStore file. For convenience, we designed a Centralized Key Manager 
Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
SPNEGO/SASL. For the detailed design and usage, please refer to 
https://github.com/trendmicro/BEE. 

Moreover, there are many more requests around Hadoop data encryption (such as 
providing a standalone module, supporting KMIP, etc.); if anyone is interested 
in those features, please let us know. 
 
Ps. Because this patch baesd on HADOOP-9331 and , before use 


  was:
This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
Hadoop crypto codec framework, but keys can only be retrieved from a local 
Java KeyStore file. For convenience, we designed a Centralized Key Manager 
Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
SPNEGO/SASL. For the detailed design and usage, please refer to 
https://github.com/trendmicro/BEE. 

Moreover, there are many more requests around Hadoop data encryption (such as 
providing a standalone module, supporting KMIP, etc.); if anyone is interested 
in those features, please let us know. 
 



 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  
 Ps. Because this patch baesd on HADOOP-9331 and , before use 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu updated HADOOP-10528:
--

Description: 
This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
Hadoop crypto codec framework, but keys can only be retrieved from a local 
Java KeyStore file. For convenience, we designed a Centralized Key Manager 
Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
SPNEGO/SASL. For the detailed design and usage, please refer to 
https://github.com/trendmicro/BEE. 

Moreover, there are many more requests around Hadoop data encryption (such as 
providing a standalone module, supporting KMIP, etc.); if anyone is interested 
in those features, please let us know. 
 
Ps. Because this patch is based on HADOOP-9331, please apply the HADOOP-9333 
and HADOOP-9332 patches before applying our HADOOP-10528.patch.





  was:
This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
Hadoop crypto codec framework, but keys can only be retrieved from a local 
Java KeyStore file. For convenience, we designed a Centralized Key Manager 
Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
SPNEGO/SASL. For the detailed design and usage, please refer to 
https://github.com/trendmicro/BEE. 

Moreover, there are many more requests around Hadoop data encryption (such as 
providing a standalone module, supporting KMIP, etc.); if anyone is interested 
in those features, please let us know. 
 
Ps. Because this patch baesd on HADOOP-9331 and , before use 



 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  
 Ps. Because this patch is based on HADOOP-9331, please apply the HADOOP-9333 
 and HADOOP-9332 patches before applying our HADOOP-10528.patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu updated HADOOP-10528:
--

Status: Patch Available  (was: Open)

 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  
 Ps. Because this patch is based on HADOOP-9331, please apply the HADOOP-9333 
 and HADOOP-9332 patches before applying our HADOOP-10528.patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976400#comment-13976400
 ] 

Hadoop QA commented on HADOOP-10528:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641173/HADOOP-10528.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3825//console

This message is automatically generated.

 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  
 Ps. Because this patch is based on HADOOP-9331, please apply the HADOOP-9333 
 and HADOOP-9332 patches before applying our HADOOP-10528.patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10503) Move junit up to v 4.11

2014-04-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10503:
---

Attachment: HADOOP-10503.5.patch

The dummy patch encountered the same problem on Jenkins:

https://builds.apache.org/job/PreCommit-HADOOP-Build/3821/

This corroborates my theory that it's a bug in our automation rather than a 
problem specific to JUnit 4.11.

At this point, the preparatory changes for HDFS, YARN, and MapReduce have been 
committed.  I'm uploading patch v5 here now, which is just the remaining Hadoop 
Common changes and the version number upgrade in the pom.xml files.

 Move junit up to v 4.11
 ---

 Key: HADOOP-10503
 URL: https://issues.apache.org/jira/browse/HADOOP-10503
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10503.1.patch, HADOOP-10503.2.patch, 
 HADOOP-10503.3.patch, HADOOP-10503.4.patch, HADOOP-10503.5.patch, 
 HADOOP-10503.dummy.patch


 JUnit 4.11 has been out for a while; other projects are happy with it, so 
 update it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu updated HADOOP-10528:
--

Status: Open  (was: Patch Available)

 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  
 Ps. Because this patch is based on HADOOP-9331, please apply the HADOOP-9333 
 and HADOOP-9332 patches before applying our HADOOP-10528.patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10528) A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)

2014-04-21 Thread howie yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

howie yu updated HADOOP-10528:
--

Status: Patch Available  (was: Open)

 A TokenKeyProvider for a Centralized Key Manager Server (BEE: bee-key-manager)
 --

 Key: HADOOP-10528
 URL: https://issues.apache.org/jira/browse/HADOOP-10528
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: howie yu
 Attachments: HADOOP-10528.patch


 This is a key provider based on HADOOP-9331. HADOOP-9331 designed a complete 
 Hadoop crypto codec framework, but keys can only be retrieved from a local 
 Java KeyStore file. For convenience, we designed a Centralized Key Manager 
 Server (BEE: bee-key-manager), and users can use this TokenKeyProvider to 
 retrieve keys from it. To secure the key exchange, we leverage HTTPS + 
 SPNEGO/SASL. For the detailed design and usage, please refer to 
 https://github.com/trendmicro/BEE. 
 Moreover, there are many more requests around Hadoop data encryption (such as 
 providing a standalone module, supporting KMIP, etc.); if anyone is 
 interested in those features, please let us know. 
  
 Ps. Because this patch is based on HADOOP-9331, please apply the HADOOP-9333 
 and HADOOP-9332 patches before applying our HADOOP-10528.patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)