[jira] [Updated] (HADOOP-10365) BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block

2014-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10365:


Component/s: util

> BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally 
> block
> --
>
> Key: HADOOP-10365
> URL: https://issues.apache.org/jira/browse/HADOOP-10365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> BufferedOutputStream outputStream = new BufferedOutputStream(
> new FileOutputStream(outputFile));
> ...
> outputStream.flush();
> outputStream.close();
> {code}
> outputStream should be closed in a finally block.
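
A minimal sketch of the suggested shape of the fix (the helper method and the
copy loop below are illustrative assumptions, not the actual FileUtil code):
{code}
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

class UnpackSketch {
  // Copies 'in' to 'outputFile'. Closing in a finally block guarantees the
  // stream is released on both the normal and the exceptional path.
  static void copyToFile(InputStream in, File outputFile) throws IOException {
    BufferedOutputStream out =
        new BufferedOutputStream(new FileOutputStream(outputFile));
    try {
      byte[] buffer = new byte[8192];
      int count;
      while ((count = in.read(buffer)) != -1) {
        out.write(buffer, 0, count);
      }
      out.flush();
    } finally {
      out.close();
    }
  }
}
{code}
On Java 7 and later, try-with-resources gives the same guarantee with less code.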



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10587) Use a thread-local cache in TokenIdentifier#getBytes to avoid creating many DataOutputBuffer objects

2014-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10587:
--

Attachment: HADOOP-10587.001.patch

> Use a thread-local cache in TokenIdentifier#getBytes to avoid creating many 
> DataOutputBuffer objects
> 
>
> Key: HADOOP-10587
> URL: https://issues.apache.org/jira/browse/HADOOP-10587
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-10587.001.patch
>
>
> We can use a thread-local cache in TokenIdentifier#getBytes to avoid creating 
> many DataOutputBuffer objects.  This will reduce our memory usage (for 
> example, when loading edit logs), and help prevent OOMs.
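
A minimal sketch of the thread-local caching idea (illustrative only; the real
TokenIdentifier and the attached patch may differ in the details):
{code}
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.Writable;

abstract class TokenIdentifierSketch implements Writable {
  // One reusable buffer per thread: concurrent callers never share state and
  // repeated getBytes() calls stop allocating a new DataOutputBuffer each time.
  private static final ThreadLocal<DataOutputBuffer> BUFFER_CACHE =
      new ThreadLocal<DataOutputBuffer>() {
        @Override
        protected DataOutputBuffer initialValue() {
          return new DataOutputBuffer();
        }
      };

  public byte[] getBytes() {
    DataOutputBuffer buf = BUFFER_CACHE.get();
    buf.reset();                 // discard data left over from the previous call
    try {
      this.write(buf);           // serialize this identifier into the cached buffer
    } catch (IOException e) {
      throw new RuntimeException("i/o error in getBytes", e);
    }
    // copy out only the valid bytes; callers must never see the shared buffer
    return Arrays.copyOf(buf.getData(), buf.getLength());
  }
}
{code}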



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10564) Add username to native RPCv9 client

2014-05-11 Thread Abraham Elmahrek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994751#comment-13994751
 ] 

Abraham Elmahrek commented on HADOOP-10564:
---

Looking better.

{code}
ret = ENOMEM;
{code}
In hadoop-native-core/common/user.c, shouldn't this be ERANGE? It looks like 
the check is against an upper boundary.

> Add username to native RPCv9 client
> ---
>
> Key: HADOOP-10564
> URL: https://issues.apache.org/jira/browse/HADOOP-10564
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10564-pnative.002.patch, 
> HADOOP-10564-pnative.003.patch, HADOOP-10564.001.patch
>
>
> Add the ability for the native RPCv9 client to set a username when initiating 
> a connection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10158) SPNEGO should work with multiple interfaces/SPNs.

2014-05-11 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10158:


   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

> SPNEGO should work with multiple interfaces/SPNs.
> -
>
> Key: HADOOP-10158
> URL: https://issues.apache.org/jira/browse/HADOOP-10158
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: HADOOP-10158-readkeytab.patch, 
> HADOOP-10158-readkeytab.patch, HADOOP-10158.patch, HADOOP-10158.patch, 
> HADOOP-10158.patch, HADOOP-10158_multiplerealms.patch, 
> HADOOP-10158_multiplerealms.patch, HADOOP-10158_multiplerealms.patch
>
>
> This is the list of internal servlets added by the namenode.
> | Name | Auth | Need to be accessible by end users |
> | StartupProgressServlet | none | no |
> | GetDelegationTokenServlet | internal SPNEGO | yes |
> | RenewDelegationTokenServlet | internal SPNEGO | yes |
> |  CancelDelegationTokenServlet | internal SPNEGO | yes |
> |  FsckServlet | internal SPNEGO | yes |
> |  GetImageServlet | internal SPNEGO | no |
> |  ListPathsServlet | token in query | yes |
> |  FileDataServlet | token in query | yes |
> |  FileChecksumServlets | token in query | yes |
> | ContentSummaryServlet | token in query | yes |
> GetDelegationTokenServlet, RenewDelegationTokenServlet, 
> CancelDelegationTokenServlet and FsckServlet are accessed by end users, but 
> hard-coded to use the internal SPNEGO filter.
> If a namenode HTTP server binds to multiple external IP addresses, the 
> internal SPNEGO service principal name may not work with the address to which 
> end users are connecting. The current SPNEGO implementation in Hadoop is 
> limited to using a single service principal per filter.
> If the underlying Hadoop Kerberos authentication handler cannot easily be 
> modified, we can at least create a separate auth filter for the end-user 
> facing servlets so that their service principals can be independently 
> configured. If no separate principal is defined, it should fall back to the 
> current behavior.
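
A minimal sketch of the proposed fallback (both configuration keys below are
hypothetical placeholders for illustration, not keys introduced by the patch):
{code}
import org.apache.hadoop.conf.Configuration;

class SpnegoPrincipalSelection {
  // hypothetical key names, used only to illustrate the fallback behavior
  static final String USER_FACING_SPNEGO_PRINCIPAL_KEY =
      "dfs.namenode.user.facing.spnego.principal";
  static final String INTERNAL_SPNEGO_PRINCIPAL_KEY =
      "dfs.namenode.kerberos.internal.spnego.principal";

  static String principalForUserFacingServlets(Configuration conf) {
    String principal = conf.get(USER_FACING_SPNEGO_PRINCIPAL_KEY);
    if (principal == null || principal.isEmpty()) {
      // not configured: fall back to the current (internal SPNEGO) behavior
      principal = conf.get(INTERNAL_SPNEGO_PRINCIPAL_KEY);
    }
    return principal;
  }
}
{code}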



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992825#comment-13992825
 ] 

Hudson commented on HADOOP-10541:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1777 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1777/])
HADOOP-10541. InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not 
closed. Contributed by Swarnim Kulkarni. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1592803)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java


> InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
> 
>
> Key: HADOOP-10541
> URL: https://issues.apache.org/jira/browse/HADOOP-10541
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Ted Yu
>Assignee: Swarnim Kulkarni
>Priority: Minor
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10541.1.patch.txt, HADOOP-10541.2.patch.txt, 
> HADOOP-10541.3.patch.txt, HADOOP-10541.4.patch
>
>
> The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf 
> :
> {code}
> InputStream is = cl.getResourceAsStream("minikdc.ldiff");
> ...
> is = cl.getResourceAsStream("minikdc-krb5.conf");
> {code}
> Before the second assignment, the first stream should be closed.
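
A minimal sketch of the suggested fix (the surrounding method is a stand-in,
not MiniKdc#initKDCServer itself):
{code}
import java.io.IOException;
import java.io.InputStream;

class ResourceCloseSketch {
  static void readResources() throws IOException {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    InputStream is = cl.getResourceAsStream("minikdc.ldiff");
    try {
      // ... consume minikdc.ldiff ...
    } finally {
      is.close();   // close the first stream before the variable is reassigned
    }
    is = cl.getResourceAsStream("minikdc-krb5.conf");
    try {
      // ... consume minikdc-krb5.conf ...
    } finally {
      is.close();
    }
  }
}
{code}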



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10541) InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992789#comment-13992789
 ] 

Hudson commented on HADOOP-10541:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1751 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1751/])
HADOOP-10541. InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not 
closed. Contributed by Swarnim Kulkarni. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1592803)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java


> InputStream in MiniKdc#initKDCServer for minikdc.ldiff is not closed
> 
>
> Key: HADOOP-10541
> URL: https://issues.apache.org/jira/browse/HADOOP-10541
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Ted Yu
>Assignee: Swarnim Kulkarni
>Priority: Minor
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10541.1.patch.txt, HADOOP-10541.2.patch.txt, 
> HADOOP-10541.3.patch.txt, HADOOP-10541.4.patch
>
>
> The same InputStream variable is used for minikdc.ldiff and minikdc-krb5.conf 
> :
> {code}
> InputStream is = cl.getResourceAsStream("minikdc.ldiff");
> ...
> is = cl.getResourceAsStream("minikdc-krb5.conf");
> {code}
> Before the second assignment, the first stream should be closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-11 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10566:
--

Attachment: HADOOP-10566.patch

Attaching the patch after rebasing on trunk.

> Refactor proxyservers out of ProxyUsers
> ---
>
> Key: HADOOP-10566
> URL: https://issues.apache.org/jira/browse/HADOOP-10566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10566.patch, HADOOP-10566.patch, 
> HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch
>
>
> HADOOP-10498 added the proxyservers feature in ProxyUsers. It is beneficial to 
> treat this as a separate feature since 
> 1> ProxyUsers is per proxyuser whereas proxyservers is per cluster; the 
> cardinality is different. 
> 2> ProxyUsers.authorize() and ProxyUsers.isproxyUser() are synchronized 
> and hence share the same lock, which impacts performance.
> Since these are two separate features, it will be an improvement to keep them 
> separate. It also enables one to fine-tune each feature independently.
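
A minimal sketch of what the separation could look like (class and method names
are assumptions, not the attached patch): proxy-server lookups get their own
holder with a lock-free read path, so they no longer contend with
ProxyUsers.authorize().
{code}
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class ProxyServersSketch {
  // a fresh immutable set is published on every refresh; reads never lock
  private volatile Set<String> proxyServers = Collections.emptySet();

  void refresh(Collection<String> servers) {
    proxyServers = Collections.unmodifiableSet(new HashSet<String>(servers));
  }

  boolean isProxyServer(String remoteAddr) {
    return proxyServers.contains(remoteAddr);
  }
}
{code}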



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10564) Add username to native RPCv9 client

2014-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992963#comment-13992963
 ] 

Colin Patrick McCabe commented on HADOOP-10564:
---

bq. Hi Colin, about user.h, we may need a struct to represent user(like ugi in 
hadoop), so in the future more things can be added to it, like auth method, 
tokens...

Yeah.  We probably want a ugi struct at some point.  I look at the stuff I'm 
adding here (a way to get a user name from the current user ID) as a building 
block for that.  One step at a time...


> Add username to native RPCv9 client
> ---
>
> Key: HADOOP-10564
> URL: https://issues.apache.org/jira/browse/HADOOP-10564
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10564-pnative.002.patch, 
> HADOOP-10564-pnative.003.patch, HADOOP-10564.001.patch
>
>
> Add the ability for the native RPCv9 client to set a username when initiating 
> a connection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10577) Fix some minors error and compile on macosx

2014-05-11 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992519#comment-13992519
 ] 

Binglin Chang commented on HADOOP-10577:


Thanks for the review Luke! I have committed this.

> Fix some minors error and compile on macosx
> ---
>
> Key: HADOOP-10577
> URL: https://issues.apache.org/jira/browse/HADOOP-10577
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HADOOP-10577.v1.patch, HADOOP-10577.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10501) Server#getHandlers() accesses handlers without synchronization

2014-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10501:


Component/s: ipc

> Server#getHandlers() accesses handlers without synchronization
> --
>
> Key: HADOOP-10501
> URL: https://issues.apache.org/jira/browse/HADOOP-10501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   Iterable getHandlers() {
> return Arrays.asList(handlers);
>   }
> {code}
> All the other methods accessing handlers are synchronized methods.
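
A minimal sketch of the suggested change (the field and element type are
stand-ins, not the real Server internals): the accessor becomes synchronized so
it pairs with the synchronized methods that write the handlers array.
{code}
import java.util.Arrays;

class ServerSketch {
  private Thread[] handlers;   // stand-in for Server's Handler[] field

  synchronized void setHandlers(Thread[] h) {
    this.handlers = h;
  }

  synchronized Iterable<Thread> getHandlers() {
    return Arrays.asList(handlers);
  }
}
{code}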



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10404) Some accesses to DomainSocketWatcher#closed are not protected by lock

2014-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994574#comment-13994574
 ] 

Ted Yu commented on HADOOP-10404:
-

[~cmccabe]:
Do you want to assign this to yourself ?

> Some accesses to DomainSocketWatcher#closed are not protected by lock
> -
>
> Key: HADOOP-10404
> URL: https://issues.apache.org/jira/browse/HADOOP-10404
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Attachments: HADOOP-10404.003.patch, HADOOP-10404.1.patch, 
> HADOOP-10404.2.patch
>
>
> {code}
>* Lock which protects toAdd, toRemove, and closed.
>*/
>   private final ReentrantLock lock = new ReentrantLock();
> {code}
> There are two places, NotificationHandler.handle() and kick(), where closed 
> is accessed without holding the lock.
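
A minimal sketch of the access pattern being asked for (not the attached
patch): every read of closed takes the same lock as the writes.
{code}
import java.util.concurrent.locks.ReentrantLock;

class ClosedFlagSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private boolean closed;   // guarded by 'lock'

  void close() {
    lock.lock();
    try {
      closed = true;
    } finally {
      lock.unlock();
    }
  }

  boolean isClosed() {
    lock.lock();            // handle() and kick() would take the lock the same way
    try {
      return closed;
    } finally {
      lock.unlock();
    }
  }
}
{code}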



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10364) JsonGenerator in Configuration#dumpConfiguration() is not closed

2014-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10364:


Component/s: conf

> JsonGenerator in Configuration#dumpConfiguration() is not closed
> 
>
> Key: HADOOP-10364
> URL: https://issues.apache.org/jira/browse/HADOOP-10364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Ted Yu
>
> {code}
> JsonGenerator dumpGenerator = dumpFactory.createJsonGenerator(out);
> {code}
> dumpGenerator is not closed in Configuration#dumpConfiguration().
> Looking at the source code of 
> org.codehaus.jackson.impl.WriterBasedGenerator#close(), it does more than 
> flush the buffer.
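
A minimal sketch of the suggested cleanup (a stand-in method, not the actual
Configuration#dumpConfiguration() code):
{code}
import java.io.IOException;
import java.io.Writer;
import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonGenerator;

class DumpSketch {
  static void dump(Writer out) throws IOException {
    JsonGenerator dumpGenerator = new JsonFactory().createJsonGenerator(out);
    try {
      dumpGenerator.writeStartObject();
      // ... write the configuration properties here ...
      dumpGenerator.writeEndObject();
      dumpGenerator.flush();
    } finally {
      dumpGenerator.close();   // releases internal buffers even if writing failed
    }
  }
}
{code}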



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10581) TestUserGroupInformation#testGetServerSideGroups fails because groups stored in Set and ArrayList are compared

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994391#comment-13994391
 ] 

Hudson commented on HADOOP-10581:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1752 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1752/])
Correcting the check-in mistake for HADOOP-10581. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593360)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
HADOOP-10581. TestUserGroupInformation#testGetServerSideGroups fails. 
Contributed by Mit Desai. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593357)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java


> TestUserGroupInformation#testGetServerSideGroups fails because groups stored 
> in Set and ArrayList are compared
> --
>
> Key: HADOOP-10581
> URL: https://issues.apache.org/jira/browse/HADOOP-10581
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 2.5.0
>
> Attachments: HADOOP-10581.patch, HADOOP-10581.patch, 
> HADOOP-10581.patch
>
>
> The test fails on some machines that have a variety of user groups.
> Initially the groups are extracted and stored in a set:
> {{Set groups = new LinkedHashSet ();}}
> When the user groups are collected by calling {{login.getGroupNames()}}, 
> they are stored in an array:
> {{String[] gi = login.getGroupNames();}}
> Because the groups are stored in different structures, the counts can 
> diverge: a set keeps only unique names, while the array keeps every entry 
> it is given.
> {{assertEquals(groups.size(), gi.length);}} fails when more than one group 
> has the same name, since the set's size will be smaller than the array's 
> length.
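
A minimal sketch of one possible fix (illustrative only, not the attached
patch): compare the two collections as sets, so duplicate group names cannot
make the counts diverge.
{code}
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

class GroupCompareSketch {
  static void check(Set<String> groups, String[] gi) {
    // gi may contain the same group name more than once; a set cannot
    Set<String> giSet = new LinkedHashSet<String>(Arrays.asList(gi));
    if (!groups.equals(giSet)) {
      throw new AssertionError("expected " + groups + " but got " + giSet);
    }
  }
}
{code}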



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10581) TestUserGroupInformation#testGetServerSideGroups fails because groups stored in Set and ArrayList are compared

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994346#comment-13994346
 ] 

Hudson commented on HADOOP-10581:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #560 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/560/])
Correcting the check-in mistake for HADOOP-10581. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593360)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
HADOOP-10581. TestUserGroupInformation#testGetServerSideGroups fails. 
Contributed by Mit Desai. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593357)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java


> TestUserGroupInformation#testGetServerSideGroups fails because groups stored 
> in Set and ArrayList are compared
> --
>
> Key: HADOOP-10581
> URL: https://issues.apache.org/jira/browse/HADOOP-10581
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 2.5.0
>
> Attachments: HADOOP-10581.patch, HADOOP-10581.patch, 
> HADOOP-10581.patch
>
>
> The test fails on some machines that have a variety of user groups.
> Initially the groups are extracted and stored in a set:
> {{Set groups = new LinkedHashSet ();}}
> When the user groups are collected by calling {{login.getGroupNames()}}, 
> they are stored in an array:
> {{String[] gi = login.getGroupNames();}}
> Because the groups are stored in different structures, the counts can 
> diverge: a set keeps only unique names, while the array keeps every entry 
> it is given.
> {{assertEquals(groups.size(), gi.length);}} fails when more than one group 
> has the same name, since the set's size will be smaller than the array's 
> length.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10583) bin/hadoop key throws NPE with no args and assorted other fixups

2014-05-11 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10583:


Status: Patch Available  (was: Open)

> bin/hadoop key throws NPE with no args and assorted other fixups
> 
>
> Key: HADOOP-10583
> URL: https://issues.apache.org/jira/browse/HADOOP-10583
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
>  Labels: patch
> Fix For: 3.0.0
>
> Attachments: HADOOP-10583.1.patch, HADOOP-10583.2.patch
>
>
> bin/hadoop key throws NPE.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10593) Concurrency Improvements

2014-05-11 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10593:
-

 Summary: Concurrency Improvements
 Key: HADOOP-10593
 URL: https://issues.apache.org/jira/browse/HADOOP-10593
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony


This is an umbrella jira to improve the concurrency of a few classes by making 
use of safe publication idioms. Most of the improvements are based on the 
following:
{panel}
To publish an object safely, both the reference to the object and the object's 
state must be made visible to other threads at the same time. A properly 
constructed object can be safely published by:


* Initializing an object reference from a static initializer;
* Storing a reference to it into a volatile field or AtomicReference;
* Storing a reference to it into a final field of a properly constructed 
object; or
* Storing a reference to it into a field that is properly guarded by a lock.
{panel}
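
A small example of the second idiom above (illustrative only): publishing an
immutable snapshot through an AtomicReference, so readers always observe a
fully constructed set.
{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

class SafePublicationExample {
  private final AtomicReference<Set<String>> current =
      new AtomicReference<Set<String>>(Collections.<String>emptySet());

  void publish(Set<String> newValues) {
    // the AtomicReference write publishes the fully built, immutable set
    current.set(Collections.unmodifiableSet(new HashSet<String>(newValues)));
  }

  boolean contains(String value) {
    return current.get().contains(value);   // no lock needed on the read path
  }
}
{code}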




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10556) Add toLowerCase support to auth_to_local rules for service name

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994343#comment-13994343
 ] 

Hudson commented on HADOOP-10556:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #560 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/560/])
HADOOP-10556. [FIXING JIRA NUMBER TYPO] Add toLowerCase support to 
auth_to_local rules for service name. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593107)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Add toLowerCase support to auth_to_local rules for service name
> ---
>
> Key: HADOOP-10556
> URL: https://issues.apache.org/jira/browse/HADOOP-10556
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.5.0
>
> Attachments: HADOOP-10556.patch, HADOOP-10556.patch
>
>
> When using Vintela to integrate Linux with AD, principals are lowercased. If 
> the accounts in AD have uppercase characters (i.e. FooBar), the Kerberos 
> principals also have uppercase characters (i.e. FooBar/). Because of 
> this, when a service (Yarn/HDFS) extracts the service name from the Kerberos 
> principal (FooBar) and uses it to obtain groups, the user is not found 
> because on Linux the user FooBar is unknown; it has been converted to foobar.
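
For illustration only (the exact rule syntax, including the trailing /L
lowercase flag, should be checked against the committed documentation; this is
not copied from the patch): a hadoop.security.auth_to_local rule that strips
the realm and folds the result to lower case, so a principal such as
FooBar/host.example.com@EXAMPLE.COM maps to the Linux account foobar.
{code}
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](.*@EXAMPLE.COM)s/@.*///L
    DEFAULT
  </value>
</property>
{code}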



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10595) Improve concurrency in HostFileReader

2014-05-11 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10595:
-

 Summary: Improve concurrency in HostFileReader
 Key: HADOOP-10595
 URL: https://issues.apache.org/jira/browse/HADOOP-10595
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Benoy Antony
Assignee: Benoy Antony


HostsFileReader is used to keep track of the included and excluded hosts in the 
cluster. Threads need to synchronize to access the sets of hosts.
This can be improved.
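
A minimal sketch of one way to reduce the synchronization (the class shape is
an assumption, not the attached patch): keep both host sets in a single
immutable holder and swap it with one volatile write, so readers see a
consistent includes/excludes pair without locking.
{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class HostsSnapshotSketch {
  static final class HostSets {
    final Set<String> includes;
    final Set<String> excludes;
    HostSets(Set<String> in, Set<String> ex) {
      this.includes = Collections.unmodifiableSet(new HashSet<String>(in));
      this.excludes = Collections.unmodifiableSet(new HashSet<String>(ex));
    }
  }

  private volatile HostSets current =
      new HostSets(Collections.<String>emptySet(), Collections.<String>emptySet());

  void refresh(Set<String> includes, Set<String> excludes) {
    current = new HostSets(includes, excludes);   // one write publishes both sets
  }

  boolean isAllowed(String host) {
    HostSets snapshot = current;                  // one read gives a consistent pair
    return snapshot.includes.contains(host) && !snapshot.excludes.contains(host);
  }
}
{code}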



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-05-11 Thread Killua Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994459#comment-13994459
 ] 

Killua Huang commented on HADOOP-10510:
---

I think this issue should be fixed. I got similar failures when I ran the 
tests. Here is mine:
{quote}
Failed tests: 
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testStatLinkToFile:244 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testStatLinkToDir:286 null
  
TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkUsingRelPaths:447->SymlinkBaseTest.checkLink:381
 null
  
TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkUsingAbsPaths:472->SymlinkBaseTest.checkLink:381
 null
  
TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkUsingFullyQualPaths:503->SymlinkBaseTest.checkLink:381
 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkToDirectory:627 
null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkViaLink:679 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlinkViaSymlink:897 
null
  
TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlinkNonExistantDest:1036
 null
  
TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlinkToExistingFile:1063
 null
  TestSymlinkLocalFSFileContext>SymlinkBaseTest.testRenameSymlink:1134 null
  
TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testStatDanglingLink:115->SymlinkBaseTest.testStatDanglingLink:301
 null
  
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testStatDanglingLink:115->SymlinkBaseTest.testStatDanglingLink:301
 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testStatLinkToFile:244 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testStatLinkToDir:286 null
  
TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkUsingRelPaths:447->SymlinkBaseTest.checkLink:381
 null
  
TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkUsingAbsPaths:472->SymlinkBaseTest.checkLink:381
 null
  
TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkUsingFullyQualPaths:503->SymlinkBaseTest.checkLink:381
 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkToDirectory:627 
null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkViaLink:679 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlinkViaSymlink:897 
null
  
TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlinkNonExistantDest:1036
 null
  
TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlinkToExistingFile:1063
 null
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testRenameSymlink:1134 null

Tests in error: 
  TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testDanglingLink:163 » IO 
Pat...
  
TestSymlinkLocalFSFileContext>TestSymlinkLocalFS.testGetLinkStatusPartQualTarget:201
 » IO
  
TestSymlinkLocalFSFileContext>SymlinkBaseTest.testCreateLinkToDotDotPrefix:822 
» IO
  
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testGetLinkStatusPartQualTarget:201
 » IO
  TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testDanglingLink:163 » IO 
Path...
  TestSymlinkLocalFSFileSystem>SymlinkBaseTest.testCreateLinkToDotDotPrefix:822 
» IO
{quote}

> TestSymlinkLocalFSFileContext tests are failing
> ---
>
> Key: HADOOP-10510
> URL: https://issues.apache.org/jira/browse/HADOOP-10510
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
> Environment: Linux
>Reporter: Daniel Darabos
> Attachments: TestSymlinkLocalFSFileContext-output.txt, 
> TestSymlinkLocalFSFileContext.txt
>
>
> Test results:
> https://gist.github.com/oza/9965197
> This was mentioned on hadoop-common-dev:
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201404.mbox/%3CCAAD07OKRSmx9VSjmfk1YxyBmnFM8mwZSp%3DizP8yKKwoXYvn3Qg%40mail.gmail.com%3E
> Can you suggest a workaround in the meantime? I'd like to send a pull request 
> for an unrelated bug, but these failures mean I cannot build hadoop-common to 
> test my fix. Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10590) ServiceAuthorizationManager is not threadsafe

2014-05-11 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10590:
-

 Summary: ServiceAuthorizationManager  is not threadsafe
 Key: HADOOP-10590
 URL: https://issues.apache.org/jira/browse/HADOOP-10590
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10590.patch

The mutators in ServiceAuthorizationManager are synchronized. The accessors 
are not synchronized.
This results in visibility issues when ServiceAuthorizationManager's state is 
accessed from different threads.
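
A minimal sketch of the visibility issue and one possible fix (field and value
types are stand-ins, not the attached patch): since the mutator replaces the
ACL map wholesale, making the reference volatile, or synchronizing the
accessors as well, restores the happens-before edge with the synchronized
writer.
{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class ServiceAclSketch {
  // volatile: unsynchronized readers still observe the map installed by refresh()
  private volatile Map<Class<?>, String> protocolToAcl = Collections.emptyMap();

  synchronized void refresh(Map<Class<?>, String> newAcls) {
    protocolToAcl =
        Collections.unmodifiableMap(new HashMap<Class<?>, String>(newAcls));
  }

  String aclFor(Class<?> protocol) {
    return protocolToAcl.get(protocol);
  }
}
{code}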




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10158) SPNEGO should work with multiple interfaces/SPNs.

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994389#comment-13994389
 ] 

Hudson commented on HADOOP-10158:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1752 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1752/])
HADOOP-10158. SPNEGO should work with multiple interfaces/SPNs. Contributed by 
Daryn Sharp. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593362)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm


> SPNEGO should work with multiple interfaces/SPNs.
> -
>
> Key: HADOOP-10158
> URL: https://issues.apache.org/jira/browse/HADOOP-10158
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: HADOOP-10158-readkeytab.patch, 
> HADOOP-10158-readkeytab.patch, HADOOP-10158.patch, HADOOP-10158.patch, 
> HADOOP-10158.patch, HADOOP-10158_multiplerealms.patch, 
> HADOOP-10158_multiplerealms.patch, HADOOP-10158_multiplerealms.patch
>
>
> This is the list of internal servlets added by the namenode.
> | Name | Auth | Need to be accessible by end users |
> | StartupProgressServlet | none | no |
> | GetDelegationTokenServlet | internal SPNEGO | yes |
> | RenewDelegationTokenServlet | internal SPNEGO | yes |
> |  CancelDelegationTokenServlet | internal SPNEGO | yes |
> |  FsckServlet | internal SPNEGO | yes |
> |  GetImageServlet | internal SPNEGO | no |
> |  ListPathsServlet | token in query | yes |
> |  FileDataServlet | token in query | yes |
> |  FileChecksumServlets | token in query | yes |
> | ContentSummaryServlet | token in query | yes |
> GetDelegationTokenServlet, RenewDelegationTokenServlet, 
> CancelDelegationTokenServlet and FsckServlet are accessed by end users, but 
> hard-coded to use the internal SPNEGO filter.
> If a namenode HTTP server binds to multiple external IP addresses, the 
> internal SPNEGO service principal name may not work with the address to which 
> end users are connecting. The current SPNEGO implementation in Hadoop is 
> limited to using a single service principal per filter.
> If the underlying Hadoop Kerberos authentication handler cannot easily be 
> modified, we can at least create a separate auth filter for the end-user 
> facing servlets so that their service principals can be independently 
> configured. If no separate principal is defined, it should fall back to the 
> current behavior.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994339#comment-13994339
 ] 

Hudson commented on HADOOP-10566:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #560 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/560/])
HADOOP-10566. Add toLowerCase support to auth_to_local rules for service name. 
(tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593105)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SecureMode.apt.vm


> Refactor proxyservers out of ProxyUsers
> ---
>
> Key: HADOOP-10566
> URL: https://issues.apache.org/jira/browse/HADOOP-10566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10566.patch, HADOOP-10566.patch, 
> HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch
>
>
> HADOOP-10498 added the proxyservers feature in ProxyUsers. It is beneficial to 
> treat this as a separate feature since 
> 1> ProxyUsers is per proxyuser whereas proxyservers is per cluster; the 
> cardinality is different. 
> 2> ProxyUsers.authorize() and ProxyUsers.isproxyUser() are synchronized 
> and hence share the same lock, which impacts performance.
> Since these are two separate features, it will be an improvement to keep them 
> separate. It also enables one to fine-tune each feature independently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10467) Enable proxyuser specification to support list of users in addition to list of groups.

2014-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994338#comment-13994338
 ] 

Hudson commented on HADOOP-10467:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #560 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/560/])
HADOOP-10467. Enable proxyuser specification to support list of users in 
addition to list of groups. (Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1593162)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SecureMode.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java


> Enable proxyuser specification to support list of users in addition to list 
> of groups.
> --
>
> Key: HADOOP-10467
> URL: https://issues.apache.org/jira/browse/HADOOP-10467
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10467.patch, HADOOP-10467.patch, 
> HADOOP-10467.patch, HADOOP-10467.patch, HADOOP-10467.patch, 
> HADOOP-10467.patch, HADOOP-10467.patch, HADOOP-10467.patch, HADOOP-10467.patch
>
>
> Today, the proxy user specification supports only a list of groups. In some 
> cases, it is useful to specify a list of users in addition to a list of 
> groups.
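
For illustration (the users key is assumed to follow the existing
hadoop.proxyuser.* naming pattern; check the committed documentation for the
exact name): superuser oozie may impersonate the listed users as well as
members of the listed groups.
{code}
<property>
  <name>hadoop.proxyuser.oozie.users</name>
  <value>alice,bob</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>etl</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
{code}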



--
This message was sent by Atlassian JIRA
(v6.2#6252)