[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine

2014-06-10 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027444#comment-14027444
 ] 

Aaron T. Myers commented on HADOOP-10641:
-

bq. I just want to make sure we are on the same page here. The intent of this 
jira is not to solve the general problem of distributed consensus. That is, I 
do not propose to build an implementation of paxos or other coordination 
algorithms here. This is only to introduce a common interface, so that real 
implementations such as ZooKeeper can be plugged into Hadoop projects.

Totally get that, but I think the point still remains that there's little 
expertise for defining a common interface for coordination engines in general 
in this project, and no real reason that the Hadoop project should necessarily 
be the place where that interface is defined. The ZooKeeper project, a ZK 
sub-project, or an entirely new TLP makes more sense to me.

> Introduce Coordination Engine
> -
>
> Key: HADOOP-10641
> URL: https://issues.apache.org/jira/browse/HADOOP-10641
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
> HADOOP-10641.patch
>
>
> Coordination Engine (CE) is a system that allows agreement on a sequence of 
> events in a distributed system. In order to be reliable, the CE should itself 
> be distributed.
> A Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
> zab) and have different implementations, depending on use cases and on 
> reliability, availability, and performance requirements.
> The CE should have a common API so that it can serve as a pluggable component 
> in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
> HBase (HBASE-10909).
> The first implementation is proposed to be based on ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10674) Rewrite the PureJavaCrc32 loop for performance improvement

2014-06-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10674:
-

Attachment: c10674_20140610.patch

c10674_20140610.patch:
- performance improvement of 45% to 60%;
- uses java.util.zip.CRC32 for Java 7 or above.
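
For illustration, a minimal sketch of how such version-based selection might 
look (the factory name is hypothetical; the actual patch may wire this 
differently):

{code}
import java.util.zip.CRC32;
import java.util.zip.Checksum;
import org.apache.hadoop.util.PureJavaCrc32;

// Hypothetical sketch: prefer the JDK's CRC32 (intrinsified on Java 7+),
// falling back to the pure-Java table-driven implementation on Java 6.
public final class Crc32Factory {
  private Crc32Factory() {}

  public static Checksum newCrc32() {
    // "java.specification.version" is "1.6", "1.7", etc. on these JDKs.
    String spec = System.getProperty("java.specification.version", "1.6");
    return spec.compareTo("1.7") >= 0 ? new CRC32() : new PureJavaCrc32();
  }
}
{code}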

> Rewrite the PureJavaCrc32 loop for performance improvement
> --
>
> Key: HADOOP-10674
> URL: https://issues.apache.org/jira/browse/HADOOP-10674
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c10674_20140609.patch, c10674_20140609b.patch, 
> c10674_20140610.patch
>
>
> Below are some performance improvement opportunities in PureJavaCrc32.
> - eliminate "off += 8; len -= 8;"
> - replace T8_x_start with hard coded constants
> - eliminate c0 - c7 local variables
> On my machine, there is a 30% to 50% improvement in most cases.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10674) Rewrite the PureJavaCrc32 loop for performance improvement

2014-06-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027352#comment-14027352
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10674:
--

A little more improvement.  

java.version = 1.6.0_65
java.runtime.name = Java(TM) SE Runtime Environment
java.runtime.version = 1.6.0_65-b14-462-11M4609
java.vm.version = 20.65-b04-462
java.vm.vendor = Apple Inc.
java.vm.name = Java HotSpot(TM) 64-Bit Server VM
java.vm.specification.version = 1.0
java.specification.version = 1.6
os.arch = x86_64
os.name = Mac OS X
os.version = 10.9.3

Performance Table (the unit is MB/sec)
|| Num Bytes || CRC32 || PureJavaCrc32 || % diff (vs CRC32) || PureJavaCrc32new || % diff (vs CRC32) || % diff (vs PureJavaCrc32) ||
|          1 |  17.368 |  174.187 | 902.9% |  173.268 | 897.6% |  -0.5% |
|          2 |  34.361 |  281.842 | 720.2% |  275.534 | 701.9% |  -2.2% |
|          4 |  65.416 |  329.511 | 403.7% |  324.046 | 395.4% |  -1.7% |
|          8 | 111.836 |  624.884 | 458.7% |  674.412 | 503.0% |   7.9% |
|         16 | 177.960 |  767.225 | 331.1% |  954.177 | 436.2% |  24.4% |
|         32 | 243.528 |  926.455 | 280.4% | 1170.222 | 380.5% |  26.3% |
|         64 | 309.750 | 1039.408 | 235.6% | 1453.092 | 369.1% |  39.8% |
|        128 | 359.060 | 1106.300 | 208.1% | 1555.267 | 333.1% |  40.6% |
|        256 | 384.203 | 1128.191 | 193.6% | 1619.925 | 321.6% |  43.6% |
|        512 | 401.706 | 1108.321 | 175.9% | 1683.524 | 319.1% |  51.9% |
|       1024 | 409.730 | 1191.740 | 190.9% | 1755.902 | 328.6% |  47.3% |
|       2048 | 410.262 | 1175.336 | 186.5% | 1786.138 | 335.4% |  52.0% |
|       4096 | 417.109 | 1145.619 | 174.7% | 1768.909 | 324.1% |  54.4% |
|       8192 | 409.864 | 1138.061 | 177.7% | 1810.518 | 341.7% |  59.1% |
|      16384 | 411.105 | 1072.341 | 160.8% | 1750.499 | 325.8% |  63.2% |
|      32768 | 418.411 | 1176.763 | 181.2% | 1790.886 | 328.0% |  52.2% |
|      65536 | 413.055 | 1143.868 | 176.9% | 1792.416 | 333.9% |  56.7% |
|     131072 | 418.510 | 1053.030 | 151.6% | 1790.235 | 327.8% |  70.0% |
|     262144 | 412.248 | 1185.558 | 187.6% | 1800.560 | 336.8% |  51.9% |
|     524288 | 417.332 | 1190.188 | 185.2% | 1812.133 | 334.2% |  52.3% |
|    1048576 | 414.104 | 1119.253 | 170.3% | 1755.396 | 323.9% |  56.8% |
|    2097152 | 419.225 | 1187.693 | 183.3% | 1847.922 | 340.8% |  55.6% |
|    4194304 | 418.692 | 1171.539 | 179.8% | 1787.660 | 327.0% |  52.6% |
|    8388608 | 412.950 | 1159.336 | 180.7% | 1688.320 | 308.8% |  45.6% |
|   16777216 | 416.055 | 1199.445 | 188.3% | 1727.302 | 315.2% |  44.0% |


> Rewrite the PureJavaCrc32 loop for performance improvement
> --
>
> Key: HADOOP-10674
> URL: https://issues.apache.org/jira/browse/HADOOP-10674
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c10674_20140609.patch, c10674_20140609b.patch
>
>
> Below are some performance improvement opportunities in PureJavaCrc32.
> - eliminate "off += 8; len -= 8;"
> - replace T8_x_start with hard coded constants
> - eliminate c0 - c7 local variables
> On my machine, there is a 30% to 50% improvement in most cases.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-06-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027346#comment-14027346
 ] 

Haohui Mai commented on HADOOP-10389:
-

{quote}
Currently, the libraries we depend on are: libuv, for portability primitives, 
protobuf-c, for protobuf functionality, expat, for XML parsing, and 
liburiparser, for parsing URIs. None of that functionality is provided by the 
C++ standard library, so your statement is false.

A lot of this code is not new. For example, we were using tree.h (which 
implements splay trees and rb trees), previously in libhdfs. The maintenance 
burden was not high. In fact, it was zero, because we never had to fix a bug in 
tree.h. So once again, your statement is just false.

bq. htable.c got a review because it is new code. I would hardly call reviewing 
new code a "maintenance burden." And anyway, there is a standard C way to use 
hash tables... the hcreate_r, hsearch_r, and hdestroy functions. We would like 
to use the standard way, but Windows doesn't implement these functions.
{quote}

I fail to understand what point you're trying to make. My point is that you 
can write much less code in a modern language with better standard libraries, 
which makes things much easier to review and maintain. For example, when 
working on trunk, how many times do you have to put up a 200KB patch like the 
one in this jira? How many big patches are there in this feature branch? 
Please be considerate of the reviewers of the patch.

{quote}
Firstly, the challenge of maintaining a consistent C++ coding style is very, 
very large. ...
For example, exceptions harm performance...
C++ library APIs have binary compatibility issues
{quote}

Arguably you can implement what you want equally well in either C++ or C. 
Coding style and performance can be problems in both.

However, before any of that, I'm much more concerned about the correctness of 
the current code. For example, I see that the code allocates {{hadoop_err}} on 
the common paths, and it has to clean it up on all error paths. I also see 
many calls to {{strcpy()}}, as well as calls to {{*printf()}} with non-constant 
format strings.

My questions are: (1) is the code free of memory leaks, buffer overflows, and 
format string overflows? (2) does the code always pass function pointers of 
the correct type? I'm perfectly happy to +1 your patches as long as you can 
show that your code is indeed free of these common defects.

Given the amount of code in the branch, this might be worth looking at sooner 
rather than waiting until a merge vote is called.



> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch, HADOOP-10389.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10640) Implement Namenode RPCs in HDFS native client

2014-06-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10640:
--

Attachment: HADOOP-10640-pnative.004.patch

Added the comments I discussed above.

> Implement Namenode RPCs in HDFS native client
> -
>
> Key: HADOOP-10640
> URL: https://issues.apache.org/jira/browse/HADOOP-10640
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10640-pnative.001.patch, 
> HADOOP-10640-pnative.002.patch, HADOOP-10640-pnative.003.patch, 
> HADOOP-10640-pnative.004.patch
>
>
> Implement the parts of libhdfs that just involve making RPCs to the Namenode, 
> such as mkdir, rename, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-06-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027288#comment-14027288
 ] 

Colin Patrick McCabe commented on HADOOP-10389:
---

bq. What makes me concerned is that the code has to bring in a lot more 
dependencies in plain C, which has a high maintenance cost

Currently, the libraries we depend on are: {{libuv}}, for portability 
primitives, {{protobuf-c}}, for protobuf functionality, {{expat}}, for XML 
parsing, and {{liburiparser}}, for parsing URIs.  None of that functionality is 
provided by the C++ standard library, so your statement is false.

bq. For example, this patch contains implementations of at least linked 
lists, splay trees, hash tables, and rb trees. There is a lot of overhead in 
implementing, reviewing, and testing that code.

A lot of this code is not new.  For example, we were using {{tree.h}} (which 
implements splay trees and rb trees), previously in libhdfs.  The maintenance 
burden was not high.  In fact, it was zero, because we never had to fix a bug 
in {{tree.h}}.  So once again, your statement is just false.

{{htable.c}} got a review because it is new code.  I would hardly call 
reviewing new code a "maintenance burden."  And anyway, there is a standard C 
way to use hash tables... the {{hcreate_r}}, {{hsearch_r}}, and {{hdestroy}} 
functions.  We would like to use the standard way, but Windows doesn't 
implement these functions.

bq. For example, are you considering supporting filenames in Unicode? In that 
case I think libicu might need to be brought into the picture.

First of all, the question of whether we should use libicu is independent of 
the question of whether we should use C\+\+.  libicu has a C interface, and the 
standard C\+\+ libraries and runtime don't provide any unicode functionality 
beyond what the standard C libraries provide.

Second of all, I see no reason to use libicu.  All the strings we are dealing 
with are UTF-8 supplied to and from protobuf.  This means that they are 
null-terminated and can be printed and handled with existing string functions.  
libicu might come into the picture if we wanted to start normalizing unicode 
strings or using wide character strings.  But we don't need or want to do that.

bq. It looks to me much more compelling to implement the code in a more 
modern language, say, C++11, where much of the current headache is taken away 
by a mature standard library.

C++ first came on the scene in 1983.  That is 31 years ago.  C++ may be a lot 
of things, but "modern" isn't one of them.  I was a C++ programmer for 10 
years.  I know the language about as well as anyone can.  I specifically chose 
C for this project for a few reasons.

Firstly, the challenge of maintaining a consistent C++ coding style is very, 
very large.  This is true even when everyone is a professional C++ programmer 
working under the same roof.  For a project like Hadoop, where C/C++ is not 
everyone's first language, the challenge is just unsupportable.  The C++ 
learning curve is just much higher than C.  You have to know everything you 
have to know for C, plus a lot of very tricky things that are unique to C++.

There are a lot of contentious issues in the community like use exceptions, or 
don't use exceptions?  Use global constructors, or don't use global 
constructors?  Use boost, or don't use boost?  Use C++0x / C++11 / C++14 or use 
some older standard?  Use runtime type information ({{dynamic_cast}}, 
{{typeid}}), or don't use runtime type information?  Operator overloading, or 
no operator overloading?

There are reasonable arguments for each of these positions.  For example, 
exceptions harm performance because of the need to maintain data to do stack 
unwinding (see 
http://preshing.com/20110807/the-cost-of-enabling-exception-handling/).  That's 
just if you don't use them... if you do use them, exceptions turn out to be a 
lot slower than return codes.  They can also make code difficult to follow.  
C++ doesn't have checked exceptions, so you can never really know what any 
function will throw.  For this reason, some fairly smart people at Google have 
decided to ban exceptions from their coding standard.  This, in turn, means 
that it's difficult for libraries to throw exceptions, since open source 
projects using the Google Coding standard (and there are a lot of them) can't 
deal with exceptions.  Of course, without exceptions, certain things in C++ are 
very hard to do.  (By the way, I'm not interested in having the argument 
for/against exceptions here, just in noting that there is huge fragmentation 
here and reasonable people on both sides.)

A similar story could be told about all the other choices.  The net effect is 
that we have to police a very large set of arbitrary style decisions that just 
wouldn't come up at all if we just used C.

C\+\+ library APIs have binary compatibility issues

[jira] [Commented] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027275#comment-14027275
 ] 

Hadoop QA commented on HADOOP-10656:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12649705/HADOOP-10656.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4043//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4043//console

This message is automatically generated.

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.002.patch, HADOOP-10656.patch
>
>
> The user-configured password file (LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked up by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}
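
For reference, a sketch of the presumably intended logic: fall back to the 
user-configured password *file* key instead of re-reading the inline password 
key. The LDAP_KEYSTORE_PASSWORD_FILE_* constant names here are assumed by 
analogy; see the attached patch for the actual fix.

{code}
keystorePass =
    conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
if (keystorePass.isEmpty()) {
  // Read the keystore password from the configured password file
  // rather than re-reading the inline password key a second time.
  keystorePass = extractPassword(
      conf.get(LDAP_KEYSTORE_PASSWORD_FILE_KEY,
               LDAP_KEYSTORE_PASSWORD_FILE_DEFAULT));
}
{code}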



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027271#comment-14027271
 ] 

Hadoop QA commented on HADOOP-10376:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12649678/HADOOP-10376.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4041//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4041//console

This message is automatically generated.

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: HADOOP-10376.patch, HADOOP-10376.patch, 
> HADOOP-10376.patch, HADOOP-10376.patch, RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10656:
---

Hadoop Flags: Reviewed

{code}
} finally {
  if (reader != null)
try {
  reader.close();
} catch (IOException e) {
  LOG.warn("Could not close password file: " + pwFile, e);
}
}
{code}

Minor nit-pick: In the above, can you please add curly braces after the if 
statement, just to clarify the nesting?  Alternatively, you could replace that 
code segment with {{IOUtils#cleanup}}.  +1 for the patch after that small 
change.  Thank you again, Brandon!
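
For comparison, the {{IOUtils#cleanup}} alternative would collapse the block 
to something like this (a sketch; {{IOUtils.cleanup}} closes each 
{{Closeable}} and logs, rather than throws, any {{IOException}}):

{code}
} finally {
  // Closes reader if non-null; any IOException is logged, not thrown.
  IOUtils.cleanup(LOG, reader);
}
{code}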

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.002.patch, HADOOP-10656.patch
>
>
> The user-configured password file (LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked up by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-6350) Documenting Hadoop metrics

2014-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027235#comment-14027235
 ] 

Hadoop QA commented on HADOOP-6350:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12649691/HADOOP-6350.8.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4042//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4042//console

This message is automatically generated.

> Documenting Hadoop metrics
> --
>
> Key: HADOOP-6350
> URL: https://issues.apache.org/jira/browse/HADOOP-6350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Hong Tang
>Assignee: Akira AJISAKA
>  Labels: metrics
> Attachments: HADOOP-6350-sample-1.patch, HADOOP-6350-sample-2.patch, 
> HADOOP-6350-sample-3.patch, HADOOP-6350.4.patch, HADOOP-6350.5.patch, 
> HADOOP-6350.6.patch, HADOOP-6350.7.patch, HADOOP-6350.8.patch, sample1.png
>
>
> Metrics should be part of public API, and should be clearly documented 
> similar to HADOOP-5073, so that we can reliably build tools on top of them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027234#comment-14027234
 ] 

Brandon Li commented on HADOOP-10656:
-

Thanks for the review, [~cnauroth]. Let's use this JIRA to fix both issues. 
I've uploaded a new patch. 

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.002.patch, HADOOP-10656.patch
>
>
> The user-configured password file (LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked up by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-10656:


Attachment: HADOOP-10656.002.patch

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.002.patch, HADOOP-10656.patch
>
>
> The user-configured password file (LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked up by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-6350) Documenting Hadoop metrics

2014-06-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-6350:
--

Attachment: HADOOP-6350.8.patch

Updated the patch to fix a typo and remove trailing whitespace.

> Documenting Hadoop metrics
> --
>
> Key: HADOOP-6350
> URL: https://issues.apache.org/jira/browse/HADOOP-6350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Hong Tang
>Assignee: Akira AJISAKA
>  Labels: metrics
> Attachments: HADOOP-6350-sample-1.patch, HADOOP-6350-sample-2.patch, 
> HADOOP-6350-sample-3.patch, HADOOP-6350.4.patch, HADOOP-6350.5.patch, 
> HADOOP-6350.6.patch, HADOOP-6350.7.patch, HADOOP-6350.8.patch, sample1.png
>
>
> Metrics should be part of public API, and should be clearly documented 
> similar to HADOOP-5073, so that we can reliably build tools on top of them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-6350) Documenting Hadoop metrics

2014-06-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027177#comment-14027177
 ] 

Akira AJISAKA commented on HADOOP-6350:
---

bq. These are not actually metrics. They are not collected or sent to sinks by 
MetricsSystem, so users cannot get them via a file or ganglia sink. Users can 
get this information only via jmx/jconsole.
However, we should document this information as well. I'll create a separate 
jira to track this.

bq. As a separate discussion, I think long-term maintenance of this 
documentation will be challenging. 
I agree with you. If this document also includes the information registered in 
MBeans (which can be accessed via jmx or jconsole), the maintenance will get 
even more challenging. 


> Documenting Hadoop metrics
> --
>
> Key: HADOOP-6350
> URL: https://issues.apache.org/jira/browse/HADOOP-6350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Hong Tang
>Assignee: Akira AJISAKA
>  Labels: metrics
> Attachments: HADOOP-6350-sample-1.patch, HADOOP-6350-sample-2.patch, 
> HADOOP-6350-sample-3.patch, HADOOP-6350.4.patch, HADOOP-6350.5.patch, 
> HADOOP-6350.6.patch, HADOOP-6350.7.patch, sample1.png
>
>
> Metrics should be part of public API, and should be clearly documented 
> similar to HADOOP-5073, so that we can reliably build tools on top of them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027175#comment-14027175
 ] 

Chris Nauroth commented on HADOOP-10656:


Hi, [~brandonli].  Nice catch, and thank you for fixing it.

This is not directly related to your patch, but I noticed that the 
{{LdapGroupsMapping#extractPassword}} method is susceptible to a file 
descriptor leak.  If one of the {{Reader#read}} calls throws an 
{{IOException}}, then we won't close the {{Reader}}.  Do you think we could fix 
this while we're in this class?  I think we'd just need to move the 
{{Reader#close}} call into a finally block.

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.patch
>
>
> The user-configured password file (LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked up by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10679) Authorize webui access using ServiceAuthorizationManager

2014-06-10 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10679:
--

Description: 
Currently, access to Hadoop via RPC can be authorized using 
_ServiceAuthorizationManager_, but there is no uniform authorization of HTTP 
access. Some of the servlets check for admin privilege. This creates an 
inconsistency in authorization between access via RPC and access via HTTP. 

The fix is to enable authorization of webui access also using 
_ServiceAuthorizationManager_. 



  was:
Currently, access to Hadoop via RPC can be authorized using 
_ServiceAuthorizationManager_, but there is no uniform authorization of HTTP 
access. Some of the servlets check for admin privilege. This creates an 
inconsistency in authorization between access via RPC and access via HTTP. 

The fix is to enable authorization of webui access using 
_ServiceAuthorizationManager_. 




> Authorize webui access using ServiceAuthorizationManager
> 
>
> Key: HADOOP-10679
> URL: https://issues.apache.org/jira/browse/HADOOP-10679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>
> Currently, access to Hadoop via RPC can be authorized using 
> _ServiceAuthorizationManager_, but there is no uniform authorization of HTTP 
> access. Some of the servlets check for admin privilege. This creates an 
> inconsistency in authorization between access via RPC and access via HTTP. 
> The fix is to enable authorization of webui access also using 
> _ServiceAuthorizationManager_. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-6350) Documenting Hadoop metrics

2014-06-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027170#comment-14027170
 ] 

Akira AJISAKA commented on HADOOP-6350:
---

Thanks [~arpitagarwal] for the review!
bq. I am not sure what this means - Each metrics record contains tags such as 
ProcessName, SessionId, and Hostname as additional information along with 
metrics. How are these tags accessed? I don't see them in jconsole. Perhaps I 
am missing some basic knowledge; let me know if so.
I can see them via jmx as follows:
{code}
 "name" : "Hadoop:service=NameNode,name=FSNamesystem",
"modelerType" : "FSNamesystem",
"tag.Context" : "dfs",
"tag.HAState" : "active",
"tag.Hostname" : "trunk",
"MissingBlocks" : 0,
"ExpiredHeartbeats" : 0,
  ...
{code}
Metrics records contain tags for grouping on host/queue/username etc.

{quote}
Namenode - snapshot metrics are missing.
DataNode - DataNodeInfo metrics are missing.
DataNode - FsDatasetState metrics are missing.
{quote}
These are not actually metrics. These are not collected or sinked by 
{{MetricsSystem}}, so users cannot get them by file or ganglia. Users can get 
these information only by jmx/jconsole.

bq. Nitpick: we should use title case consistently for sub-headings e.g. 
rpcdetail --> RpcDetailed
The title shows the name of the metrics record, so its case can be 
inconsistent when the name itself is inconsistent. For example, the name 
"namenode" is set by the following code (NameNodeMetrics.java):
{code}
  final MetricsRegistry registry = new MetricsRegistry("namenode");
{code}
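
For illustration, tags like the ones shown above are attached through the same 
registry (a sketch against the metrics2 API; the source class and tag values 
here are made up, mirroring how sources such as JvmMetrics tag their records):

{code}
import org.apache.hadoop.metrics2.impl.MsInfo;
import org.apache.hadoop.metrics2.lib.MetricsRegistry;

// Hypothetical metrics source: tags attached to the registry surface
// as "tag.*" entries in the JMX output shown above.
public class ExampleMetricsSource {
  final MetricsRegistry registry = new MetricsRegistry("namenode");

  ExampleMetricsSource(String sessionId) {
    registry.tag(MsInfo.ProcessName, "NameNode");  // "tag.ProcessName"
    registry.tag(MsInfo.SessionId, sessionId);     // "tag.SessionId"
  }
}
{code}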

> Documenting Hadoop metrics
> --
>
> Key: HADOOP-6350
> URL: https://issues.apache.org/jira/browse/HADOOP-6350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Hong Tang
>Assignee: Akira AJISAKA
>  Labels: metrics
> Attachments: HADOOP-6350-sample-1.patch, HADOOP-6350-sample-2.patch, 
> HADOOP-6350-sample-3.patch, HADOOP-6350.4.patch, HADOOP-6350.5.patch, 
> HADOOP-6350.6.patch, HADOOP-6350.7.patch, sample1.png
>
>
> Metrics should be part of public API, and should be clearly documented 
> similar to HADOOP-5073, so that we can reliably build tools on top of them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10668) TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails

2014-06-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027165#comment-14027165
 ] 

Chris Nauroth commented on HADOOP-10668:


It looks like upgrading ZooKeeper didn't fix this.  Here is a Jenkins build 
after HADOOP-9555 was committed, showing a failure in 
{{TestZKFailoverController#testAutoFailoverOnLostZKSession}}.

https://builds.apache.org/job/PreCommit-HADOOP-Build/4039/

Let's keep HADOOP-10668 open.

> TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails
> 
>
> Key: HADOOP-10668
> URL: https://issues.apache.org/jira/browse/HADOOP-10668
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: test
>
> From 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/4018//testReport/org.apache.hadoop.ha/TestZKFailoverControllerStress/testExpireBackAndForth/
>  :
> {code}
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.server.DataTree.getData(DataTree.java:648)
>   at org.apache.zookeeper.server.ZKDatabase.getData(ZKDatabase.java:371)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireActiveLockHolder(MiniZKFCCluster.java:199)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:234)
>   at 
> org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:84)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9629) Support Windows Azure Storage - Blob as a file system in Hadoop

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027161#comment-14027161
 ] 

Hudson commented on HADOOP-9629:


SUCCESS: Integrated in Hadoop-trunk-Commit #5679 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5679/])
HADOOP-9629. Support Windows Azure Storage - Blob as a file system in Hadoop. 
Contributed by Dexter Bradshaw, Mostafa Elhemali, Xi Fang, Johannes Klein, 
David Lao, Mike Liddell, Chuan Liu, Lengning Liu, Ivan Mitic, Michael Rys, 
Alexander Stojanovic, Brian Swan, and Min Wei. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601781)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-azure
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/.gitignore
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/README.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/config
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/config/checkstyle.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureException.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlobMaterialization.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/FileMetadata.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/KeyProvider.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/KeyProviderException.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PartialListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SelfThrottlingIntercept.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SendRequestIntercept.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/ShellDecryptionKeyProvider.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SimpleKeyProvider.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterface.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterfaceImpl.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/Wasb.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbFsck.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/package.html
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache
* /hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/InMemoryBlockBlobStore.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/NativeAzureFileSystemBaseTest.java
* 
/hadoop/common/trunk/hadoop-tool

[jira] [Updated] (HADOOP-9629) Support Windows Azure Storage - Blob as a file system in Hadoop

2014-06-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9629:
--

Fix Version/s: 3.0.0

I have committed this to trunk.  I'm going to let it bake there a bit before 
merging down to branch-2.  I'll keep this issue open until after merging to 
branch-2.

> Support Windows Azure Storage - Blob as a file system in Hadoop
> ---
>
> Key: HADOOP-9629
> URL: https://issues.apache.org/jira/browse/HADOOP-9629
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Mostafa Elhemali
>Assignee: Mike Liddell
> Fix For: 3.0.0
>
> Attachments: HADOOP-9629 - Azure Filesystem - Information for 
> developers.docx, HADOOP-9629 - Azure Filesystem - Information for 
> developers.pdf, HADOOP-9629.2.patch, HADOOP-9629.3.patch, HADOOP-9629.patch, 
> HADOOP-9629.trunk.1.patch, HADOOP-9629.trunk.2.patch, 
> HADOOP-9629.trunk.3.patch, HADOOP-9629.trunk.4.patch, 
> HADOOP-9629.trunk.5.patch
>
>
> h2. Description
> This JIRA adds a new file system implementation for accessing 
> Windows Azure Storage - Blob from within Hadoop, such as using blobs as input 
> to MR jobs or configuring MR jobs to put their output directly into blob 
> storage.
> h2. High level design
> At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blob storage; the scheme wasb is used for 
> accessing it over HTTP, and wasbs for accessing over HTTPS. We use the URI 
> scheme {code}wasb[s]://<containername>@<accountname>/path/to/file{code} to 
> address individual blobs. We use the standard Azure Java SDK 
> (com.microsoft.windowsazure) to do most of the work. In order to map a 
> hierarchical file system over the flat name-value pair nature of blob 
> storage, we create a specially tagged blob named path/to/dir whenever we 
> create a directory called path/to/dir, then files under that are stored as 
> normal blobs path/to/dir/file. We have many metrics implemented for it using 
> the Metrics2 interface. Tests are implemented mostly using a mock 
> implementation for the Azure SDK functionality, with an option to test 
> against real blob storage if configured (instructions are provided in 
> README.txt).
> h2. Credits and history
> This has been ongoing work for a while, and the early version of this work 
> can be seen in HADOOP-8079. This JIRA is a significant revision of that and 
> we'll post the patch here for Hadoop trunk first, then post a patch for 
> branch-1 as well for backporting the functionality if accepted. Credit for 
> this work goes to the early team: [~minwei], [~davidlao], [~lengningliu] and 
> [~stojanovic] as well as multiple people who have taken over this work since 
> then (hope I don't forget anyone): [~dexterb], Johannes Klein, [~ivanmi], 
> Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and 
> [~chuanliu].
> h2. Test
> Besides unit tests, we have used WASB as the default file system in our 
> service product. (HDFS is also used, but not as the default file system.) 
> Various customer and test workloads have been run against clusters with such 
> configurations for quite some time. The current version reflects the version 
> of the code tested and used in our production environment.
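
As a usage illustration, reading a blob through the new file system might look 
like this (a sketch; the account and container names are made up):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: address a blob with the wasb scheme and read it through the
// ordinary FileSystem API.
public class WasbReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(
        "wasb://mycontainer@myaccount.blob.core.windows.net/path/to/file");
    FileSystem fs = path.getFileSystem(conf);  // resolves the wasb scheme
    FSDataInputStream in = fs.open(path);
    try {
      // ... read as with any other Hadoop file system ...
    } finally {
      in.close();
    }
  }
}
{code}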



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9629) Support Windows Azure Storage - Blob as a file system in Hadoop

2014-06-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9629:
--

Issue Type: New Feature  (was: Improvement)

> Support Windows Azure Storage - Blob as a file system in Hadoop
> ---
>
> Key: HADOOP-9629
> URL: https://issues.apache.org/jira/browse/HADOOP-9629
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Mostafa Elhemali
>Assignee: Mike Liddell
> Attachments: HADOOP-9629 - Azure Filesystem - Information for 
> developers.docx, HADOOP-9629 - Azure Filesystem - Information for 
> developers.pdf, HADOOP-9629.2.patch, HADOOP-9629.3.patch, HADOOP-9629.patch, 
> HADOOP-9629.trunk.1.patch, HADOOP-9629.trunk.2.patch, 
> HADOOP-9629.trunk.3.patch, HADOOP-9629.trunk.4.patch, 
> HADOOP-9629.trunk.5.patch
>
>
> h2. Description
> This JIRA adds a new file system implementation for accessing 
> Windows Azure Storage - Blob from within Hadoop, such as using blobs as input 
> to MR jobs or configuring MR jobs to put their output directly into blob 
> storage.
> h2. High level design
> At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blob storage; the scheme wasb is used for 
> accessing it over HTTP, and wasbs for accessing over HTTPS. We use the URI 
> scheme {code}wasb[s]://<containername>@<accountname>/path/to/file{code} to 
> address individual blobs. We use the standard Azure Java SDK 
> (com.microsoft.windowsazure) to do most of the work. In order to map a 
> hierarchical file system over the flat name-value pair nature of blob 
> storage, we create a specially tagged blob named path/to/dir whenever we 
> create a directory called path/to/dir, then files under that are stored as 
> normal blobs path/to/dir/file. We have many metrics implemented for it using 
> the Metrics2 interface. Tests are implemented mostly using a mock 
> implementation for the Azure SDK functionality, with an option to test 
> against real blob storage if configured (instructions are provided in 
> README.txt).
> h2. Credits and history
> This has been ongoing work for a while, and the early version of this work 
> can be seen in HADOOP-8079. This JIRA is a significant revision of that and 
> we'll post the patch here for Hadoop trunk first, then post a patch for 
> branch-1 as well for backporting the functionality if accepted. Credit for 
> this work goes to the early team: [~minwei], [~davidlao], [~lengningliu] and 
> [~stojanovic] as well as multiple people who have taken over this work since 
> then (hope I don't forget anyone): [~dexterb], Johannes Klein, [~ivanmi], 
> Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and 
> [~chuanliu].
> h2. Test
> Besides unit tests, we have used WASB as the default file system in our 
> service product. (HDFS is also used, but not as the default file system.) 
> Various customer and test workloads have been run against clusters with such 
> configurations for quite some time. The current version reflects the version 
> of the code tested and used in our production environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10679) Authorize webui access using ServiceAuthorizationManager

2014-06-10 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027125#comment-14027125
 ] 

Benoy Antony commented on HADOOP-10679:
---

Here is the proposal:

1. Define an AuthorizationFilter. 
2. The AuthorizationFilter looks up the ACL in hadoop-policy.xml using a key 
derived from {{HttpServletRequest.getServletPath()}}.
3. If no ACL is found, the ACL defaults to *. (A sketch follows the notes 
below.)

This will inherit the following features (in progress):
Note 1 : Administrators can override the default ACL - HADOOP-10649
Note 2 : Administrators can specify a reverse ACL - HADOOP-10650
Note 3 : Administrators can block/grant access via IPs - HADOOP-10651
Note 4 : One can plug in a different AuthZ module - HADOOP-10654
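
A hypothetical sketch of steps 1-3 (the class name and ACL key scheme are 
illustrative, not from any patch):

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;

// Hypothetical AuthorizationFilter: authorizes webui requests against an
// ACL keyed off the servlet path, defaulting to "*" (allow everyone).
public class AuthorizationFilter implements Filter {
  private Configuration conf;

  @Override
  public void init(FilterConfig filterConfig) {
    conf = new Configuration();
    conf.addResource("hadoop-policy.xml");
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    HttpServletResponse httpResp = (HttpServletResponse) resp;
    // Illustrative ACL key derived from the servlet path (step 2).
    String key = "security.webui"
        + httpReq.getServletPath().replace('/', '.') + ".acl";
    // Default to "*" when no ACL is configured (step 3).
    AccessControlList acl = new AccessControlList(conf.get(key, "*"));
    String user = httpReq.getRemoteUser();
    if (user != null
        && acl.isUserAllowed(UserGroupInformation.createRemoteUser(user))) {
      chain.doFilter(req, resp);
    } else {
      httpResp.sendError(HttpServletResponse.SC_FORBIDDEN);
    }
  }

  @Override
  public void destroy() {
  }
}
{code}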




> Authorize webui access using ServiceAuthorizationManager
> 
>
> Key: HADOOP-10679
> URL: https://issues.apache.org/jira/browse/HADOOP-10679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>
> Currently, access to Hadoop via RPC can be authorized using 
> _ServiceAuthorizationManager_, but there is no uniform authorization of HTTP 
> access. Some of the servlets check for admin privilege. This creates an 
> inconsistency in authorization between access via RPC and access via HTTP. 
> The fix is to enable authorization of webui access using 
> _ServiceAuthorizationManager_. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine

2014-06-10 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027111#comment-14027111
 ] 

Konstantin Shvachko commented on HADOOP-10641:
--

> there's not much expertise in Hadoop for the general problem of distributed 
> consensus

I just want to make sure we are on the same page here. The intent of this jira 
is not to solve the general problem of distributed consensus. That is, I do not 
propose to build an implementation of paxos or other coordination algorithms 
here. This is only to introduce a common interface, so that real 
implementations such as ZooKeeper can be plugged into Hadoop projects.
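
For illustration only, such a pluggable interface might look roughly like this 
(hypothetical names; the actual proposal is in the attached patches):

{code}
import java.io.IOException;
import java.io.Serializable;

/**
 * Hypothetical sketch of a pluggable coordination interface. An
 * implementation (e.g. ZooKeeper-backed) agrees on a single global
 * order of proposals and delivers them to registered learners.
 */
interface CoordinationEngine {
  /** Submit a proposal to be agreed upon across the ensemble. */
  void submitProposal(Serializable proposal) throws IOException;

  /** Register a callback invoked for each agreed-upon proposal, in the
   *  single global order that the engine guarantees. */
  void registerLearner(Learner learner);

  interface Learner {
    void consume(long sequenceNumber, Serializable proposal);
  }
}
{code}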

> Introduce Coordination Engine
> -
>
> Key: HADOOP-10641
> URL: https://issues.apache.org/jira/browse/HADOOP-10641
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
> HADOOP-10641.patch
>
>
> Coordination Engine (CE) is a system that allows agreement on a sequence of 
> events in a distributed system. In order to be reliable, the CE should itself 
> be distributed.
> A Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
> zab) and have different implementations, depending on use cases and on 
> reliability, availability, and performance requirements.
> The CE should have a common API so that it can serve as a pluggable component 
> in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
> HBase (HBASE-10909).
> The first implementation is proposed to be based on ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10679) Authorize webui access using ServiceAuthorizationManager

2014-06-10 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10679:
-

 Summary: Authorize webui access using ServiceAuthorizationManager
 Key: HADOOP-10679
 URL: https://issues.apache.org/jira/browse/HADOOP-10679
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony


Currently, access to Hadoop via RPC can be authorized using 
_ServiceAuthorizationManager_, but there is no uniform authorization of HTTP 
access. Some of the servlets check for admin privilege. This creates an 
inconsistency in authorization between access via RPC and access via HTTP. 

The fix is to enable authorization of webui access using 
_ServiceAuthorizationManager_. 





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-06-10 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10376:
--

Attachment: HADOOP-10376.patch

[~benoyantony] good catch; I made the mutators synchronized. Also, on the 
name: I'm okay with RefreshHandlerRegistry if people think it's clearer, 
though I do like that RefreshRegistry is concise.

[~wuzesheng] sounds like a good idea for when we replace the old refresh 
protocols with rewritten ones in later patches. 
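
For context, a hypothetical sketch of the registry shape under discussion, 
with synchronized mutators per the review comment above (all names are 
illustrative; see the attached patch for the real design):

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a registry mapping refresh identifiers to handlers,
// so one generic refresh protocol can dispatch on a variable payload.
class RefreshRegistry {
  private final Map<String, RefreshHandler> handlers =
      new HashMap<String, RefreshHandler>();

  // Mutators are synchronized so concurrent register/unregister is safe.
  public synchronized void register(String identifier, RefreshHandler h) {
    handlers.put(identifier, h);
  }

  public synchronized void unregister(String identifier) {
    handlers.remove(identifier);
  }

  public synchronized String dispatch(String identifier, String[] args) {
    RefreshHandler h = handlers.get(identifier);
    if (h == null) {
      throw new IllegalArgumentException("No handler for: " + identifier);
    }
    return h.handleRefresh(identifier, args);
  }

  interface RefreshHandler {
    String handleRefresh(String identifier, String[] args);
  }
}
{code}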

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: HADOOP-10376.patch, HADOOP-10376.patch, 
> HADOOP-10376.patch, HADOOP-10376.patch, RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-06-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026987#comment-14026987
 ] 

Haohui Mai commented on HADOOP-10389:
-

bq. No wheels are being reinvented... we are using libuv for our portability 
layer and other libraries where appropriate.

What makes me concerned is that the code has to bring in a lot more 
dependencies in plain C, which has a high maintenance cost. For example, this 
patch contains implementations of at least linked lists, splay trees, hash 
tables, and rb trees. There is a lot of overhead in implementing, reviewing, 
and testing that code. For example, a lot of time gets wasted on issues like 
the following:

https://issues.apache.org/jira/browse/HADOOP-10640?focusedCommentId=14026841&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14026841

The above link demonstrates that even when the code is directly copied from 
other places, the overhead of reviewing and maintaining it is not negligible. 
I anticipate that down the road the problem will only get worse. For example, 
are you considering supporting filenames in Unicode? In that case I think 
libicu might need to be brought into the picture.

It looks to me much more compelling to implement the code in a more modern 
language, say, C++11, where much of the current headache is taken away by a 
mature standard library.

> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch, HADOOP-10389.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10678) Unnecessary synchronization on collection used for only tests

2014-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026888#comment-14026888
 ] 

Hadoop QA commented on HADOOP-10678:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12649642/HADOOP-10678.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4040//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4040//console

This message is automatically generated.

> Unnecessary synchronization on collection used for only tests
> -
>
> Key: HADOOP-10678
> URL: https://issues.apache.org/jira/browse/HADOOP-10678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-10678.patch
>
>
> _SecurityUtil.getKerberosInfo()_ is a function used during authentication 
> and authorization. 
> It has two synchronized blocks, one of which is on testProviders. This is 
> an unnecessary lock given that testProviders is empty in real scenarios.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026872#comment-14026872
 ] 

Hadoop QA commented on HADOOP-10656:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12649638/HADOOP-10656.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4039//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4039//console

This message is automatically generated.

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.patch
>
>
> The user configured password file(LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10668) TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails

2014-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026870#comment-14026870
 ] 

Ted Yu commented on HADOOP-10668:
-

I saw the test failure on Jenkins.

Will keep an eye on Jenkins in the future.

> TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails
> 
>
> Key: HADOOP-10668
> URL: https://issues.apache.org/jira/browse/HADOOP-10668
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: test
>
> From 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/4018//testReport/org.apache.hadoop.ha/TestZKFailoverControllerStress/testExpireBackAndForth/
>  :
> {code}
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.server.DataTree.getData(DataTree.java:648)
>   at org.apache.zookeeper.server.ZKDatabase.getData(ZKDatabase.java:371)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireActiveLockHolder(MiniZKFCCluster.java:199)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:234)
>   at 
> org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:84)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10668) TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails

2014-06-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026845#comment-14026845
 ] 

Chris Nauroth commented on HADOOP-10668:


I've committed HADOOP-9555.  I've run {{TestZKFailoverControllerStress}} 
multiple times with no failures.  Maybe the ZooKeeper upgrade fixed this as a 
side effect.  Ted, do you have a consistent repro, or were you only seeing it 
on Jenkins?  If the latter, then maybe we need to wait and see how the test 
behaves on Jenkins over the next several days.

> TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails
> 
>
> Key: HADOOP-10668
> URL: https://issues.apache.org/jira/browse/HADOOP-10668
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: test
>
> From 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/4018//testReport/org.apache.hadoop.ha/TestZKFailoverControllerStress/testExpireBackAndForth/
>  :
> {code}
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.server.DataTree.getData(DataTree.java:648)
>   at org.apache.zookeeper.server.ZKDatabase.getData(ZKDatabase.java:371)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireActiveLockHolder(MiniZKFCCluster.java:199)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:234)
>   at 
> org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:84)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10678) Unnecessary synchronization on collection used for only tests

2014-06-10 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10678:
--

Summary: Unnecessary synchronization on collection used for only tests  
(was: Unnecessary synchronization on collection used for test)

> Unnecessary synchronization on collection used for only tests
> -
>
> Key: HADOOP-10678
> URL: https://issues.apache.org/jira/browse/HADOOP-10678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-10678.patch
>
>
> _SecurityUtil.getKerberosInfo()_ is a function used during authentication 
> and authorization. 
> It has two synchronized blocks, one of which is on testProviders. This is 
> an unnecessary lock given that testProviders is empty in real scenarios.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10640) Implement Namenode RPCs in HDFS native client

2014-06-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026841#comment-14026841
 ] 

Colin Patrick McCabe commented on HADOOP-10640:
---

bq. Do we need to call free in hadoop_err_prepend.c for asprintf error cases? 
Docs say no memory is allocated in this case.

The string being freed here was allocated further up in the function.  It's not 
allocated by the (failed) asprintf.

bq. The hash table implementation has an unbounded while loop. Though it will 
probably never spin forever, since we guarantee there will always be an open 
slot, should we add a terminating case to it?

The hash table is never more than half full.  Check this code in {{htable_put}}:
{code}
+// Re-hash if we have used more than half of the hash table
+nused = htable->used + 1;
+if (nused >= (htable->capacity / 2)) {
+ret = htable_realloc(htable, htable->capacity * 2);
+if (ret)
+return ret;
+}
+htable_insert_internal(htable->elem, htable->capacity,
+htable->hash_fun, key, val);
{code}

I will add a comment to {{htable_insert_internal}} making this invariant clear.

bq. Should the above hash table be modified to allow custom hash functions in 
the future? Modifications would include ensuring the hash function was within 
bounds, providing an interface, etc.

Already done :)

{code}
+struct htable *htable_alloc(uint32_t size,
+htable_hash_fn_t hash_fun, htable_eq_fn_t eq_fun)
{code}

You can supply your own hash function as the {{hash_fun}} argument.

bq. The config object seems to be using the builder pattern. Wouldn't it make 
sense to just create a configuration object and provide 'set' and 'get' 
functions? Unless the configuration object is immutable?

The configuration object is immutable once created.  I wanted to avoid 
multithreading problems with get and set... we've had a lot of those with 
{{Configuration}} in Hadoop.  This also simplifies the C code, since we can 
simply use strings from inside the {{hconf}} object without worrying about 
whether someone is going to {{free}} them while we're using them.  This means, 
for example, that we don't need to copy them inside {{hconf_get}}.  All the 
strings get freed at the end, when the {{hconf}} is freed.  I'll add a comment 
that hconf is immutable.

> Implement Namenode RPCs in HDFS native client
> -
>
> Key: HADOOP-10640
> URL: https://issues.apache.org/jira/browse/HADOOP-10640
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10640-pnative.001.patch, 
> HADOOP-10640-pnative.002.patch, HADOOP-10640-pnative.003.patch
>
>
> Implement the parts of libhdfs that just involve making RPCs to the Namenode, 
> such as mkdir, rename, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10678) Unnecessary synchronization on collection used for test

2014-06-10 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10678:
--

Status: Patch Available  (was: Open)

> Unnecessary synchronization on collection used for test
> ---
>
> Key: HADOOP-10678
> URL: https://issues.apache.org/jira/browse/HADOOP-10678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-10678.patch
>
>
> _SecurityUtil.getKerberosInfo()_ is a function used during authentication 
> and authorization. 
> It has two synchronized blocks, one of which is on testProviders. This is 
> an unnecessary lock given that testProviders is empty in real scenarios.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10678) Unnecessary synchronization on collection used for test

2014-06-10 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony moved HDFS-6512 to HADOOP-10678:
-

Component/s: (was: security)
 security
Key: HADOOP-10678  (was: HDFS-6512)
Project: Hadoop Common  (was: Hadoop HDFS)

> Unnecessary synchronization on collection used for test
> ---
>
> Key: HADOOP-10678
> URL: https://issues.apache.org/jira/browse/HADOOP-10678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-10678.patch
>
>
> _SecurityUtil.getKerberosInfo()_ is a function used during authentication 
> and authorization. 
> It has two synchronized blocks, one of which is on testProviders. This is 
> an unnecessary lock given that testProviders is empty in real scenarios.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10678) Unnecessary synchronization on collection used for test

2014-06-10 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10678:
--

Attachment: HADOOP-10678.patch

Attaching the patch for review. No test cases are added since the patch 
doesn't change any functionality.
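
For illustration, a minimal sketch of one way to drop the lock (an assumption, 
not necessarily what the attached patch does): back testProviders with a 
CopyOnWriteArrayList so the hot read path needs no synchronized block.

{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class KerberosInfoLookupSketch {
  // Stand-in for Hadoop's SecurityInfo provider type.
  public interface Provider {
    Object getKerberosInfo(Class<?> protocol);
  }

  // Populated only by tests; empty in production, so reads dominate and
  // the copy-on-write cost of rare test-time writes is irrelevant.
  private static final List<Provider> testProviders =
      new CopyOnWriteArrayList<Provider>();

  public static Object getKerberosInfo(Class<?> protocol) {
    // No synchronized block: iterating a copy-on-write list is safe even
    // if a test registers a provider concurrently.
    for (Provider provider : testProviders) {
      Object info = provider.getKerberosInfo(protocol);
      if (info != null) {
        return info;
      }
    }
    return null; // fall through to the regular annotation-based lookup
  }
}
{code}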

> Unnecessary synchronization on collection used for test
> ---
>
> Key: HADOOP-10678
> URL: https://issues.apache.org/jira/browse/HADOOP-10678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-10678.patch
>
>
> _SecurityUtil.getKerberosInfo()_ is a function used during authentication 
> and authorization. 
> It has two synchronized blocks, one of which is on testProviders. This is 
> an unnecessary lock given that testProviders is empty in real scenarios.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-10656:


Status: Patch Available  (was: Open)

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.patch
>
>
> The user configured password file(LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HADOOP-10656:
---

Assignee: Brandon Li

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10656.patch
>
>
> The user configured password file(LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10656) The password keystore file is not picked by LDAP group mapping

2014-06-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-10656:


Attachment: HADOOP-10656.patch

> The password keystore file is not picked by LDAP group mapping
> --
>
> Key: HADOOP-10656
> URL: https://issues.apache.org/jira/browse/HADOOP-10656
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Brandon Li
> Attachments: HADOOP-10656.patch
>
>
> The user configured password file(LDAP_KEYSTORE_PASSWORD_FILE_KEY) will not 
> be picked by LdapGroupsMapping:
> In setConf():
> {noformat}
> keystorePass =
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
> if (keystorePass.isEmpty()) {
>   keystorePass = extractPassword(
> conf.get(LDAP_KEYSTORE_PASSWORD_KEY, 
> LDAP_KEYSTORE_PASSWORD_DEFAULT)); 
> }
> {noformat}
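
For illustration, a minimal sketch of what the fix presumably looks like 
(assuming the fallback branch was meant to read the file key, and that a 
LDAP_KEYSTORE_PASSWORD_FILE_DEFAULT constant exists alongside it); the actual 
change is in the attached patch:

{code}
keystorePass =
    conf.get(LDAP_KEYSTORE_PASSWORD_KEY, LDAP_KEYSTORE_PASSWORD_DEFAULT);
if (keystorePass.isEmpty()) {
  // Read the *file* key here so extractPassword() is handed a path,
  // not the (empty) inline password value.
  keystorePass = extractPassword(conf.get(
      LDAP_KEYSTORE_PASSWORD_FILE_KEY, LDAP_KEYSTORE_PASSWORD_FILE_DEFAULT));
}
{code}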



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9555) HA functionality that uses ZooKeeper may experience inadvertent TCP RST and miss session expiration event due to bug in client connection management

2014-06-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9555:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed this to trunk and branch-2.  Arpit, thank you for the review.

> HA functionality that uses ZooKeeper may experience inadvertent TCP RST and 
> miss session expiration event due to bug in client connection management
> 
>
> Key: HADOOP-9555
> URL: https://issues.apache.org/jira/browse/HADOOP-9555
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-9555.1.patch
>
>
> ZOOKEEPER-1702 tracks a client connection management bug.  The bug can cause 
> an unexpected TCP RST that ultimately prevents delivery of a session 
> expiration event.  The symptoms of the bug seem to show up more frequently on 
> Windows than on other platforms (though it's not really a Windows-specific 
> bug).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9555) HA functionality that uses ZooKeeper may experience inadvertent TCP RST and miss session expiration event due to bug in client connection management

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026777#comment-14026777
 ] 

Hudson commented on HADOOP-9555:


SUCCESS: Integrated in Hadoop-trunk-Commit #5674 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5674/])
HADOOP-9555. HA functionality that uses ZooKeeper may experience inadvertent 
TCP RST and miss session expiration event due to bug in client connection 
management. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601709)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> HA functionality that uses ZooKeeper may experience inadvertent TCP RST and 
> miss session expiration event due to bug in client connection management
> 
>
> Key: HADOOP-9555
> URL: https://issues.apache.org/jira/browse/HADOOP-9555
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9555.1.patch
>
>
> ZOOKEEPER-1702 tracks a client connection management bug.  The bug can cause 
> an unexpected TCP RST that ultimately prevents delivery of a session 
> expiration event.  The symptoms of the bug seem to show up more frequently on 
> Windows than on other platforms (though it's not really a Windows-specific 
> bug).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10640) Implement Namenode RPCs in HDFS native client

2014-06-10 Thread Abraham Elmahrek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026747#comment-14026747
 ] 

Abraham Elmahrek commented on HADOOP-10640:
---

Awesome stuff Colin. Just a few comments:

* Do we need to call free in hadoop_err_prepend.c for asprintf error cases? 
Docs say no memory is allocated in this case.
{code}
if (asprintf(&nmsg, "%s: %s", prepend_str, err->msg) < 0) {
    free(prepend_str);
    return (struct hadoop_err*)err;
}
{code}

* The hash table implementation has an unbounded while loop. Though it will 
probably never spin forever, since we guarantee there will always be an open 
slot, should we add a terminating case to it?
{code}
static void htable_insert_internal(struct htable_pair *nelem, 
        uint32_t capacity, htable_hash_fn_t hash_fun, void *key,
        void *val)
{
    uint32_t i;

    i = hash_fun(key, capacity);
    while (1) {
        if (!nelem[i].key) {
            nelem[i].key = key;
            nelem[i].val = val;
            return;
        }
        i++;
        if (i == capacity) {
            i = 0;
        }
    }
}
{code}

* Should the above hash table be modified to allow custom hash functions in the 
future? Modifications would include ensuring the hash function was within 
bounds, providing an interface, etc.

* The config object seems to be using the builder pattern. Wouldn't it make 
sense to just create a configuration object and provide 'set' and 'get' 
functions? Unless the configuration object is immutable?


> Implement Namenode RPCs in HDFS native client
> -
>
> Key: HADOOP-10640
> URL: https://issues.apache.org/jira/browse/HADOOP-10640
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10640-pnative.001.patch, 
> HADOOP-10640-pnative.002.patch, HADOOP-10640-pnative.003.patch
>
>
> Implement the parts of libhdfs that just involve making RPCs to the Namenode, 
> such as mkdir, rename, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9555) HA functionality that uses ZooKeeper may experience inadvertent TCP RST and miss session expiration event due to bug in client connection management

2014-06-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026713#comment-14026713
 ] 

Arpit Agarwal commented on HADOOP-9555:
---

+1, thanks for the followup fix Chris!

> HA functionality that uses ZooKeeper may experience inadvertent TCP RST and 
> miss session expiration event due to bug in client connection management
> 
>
> Key: HADOOP-9555
> URL: https://issues.apache.org/jira/browse/HADOOP-9555
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9555.1.patch
>
>
> ZOOKEEPER-1702 tracks a client connection management bug.  The bug can cause 
> an unexpected TCP RST that ultimately prevents delivery of a session 
> expiration event.  The symptoms of the bug seem to show up more frequently on 
> Windows than on other platforms (though it's not really a Windows-specific 
> bug).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10646) KeyProvider buildVersionName method should be moved to a utils class

2014-06-10 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026700#comment-14026700
 ] 

Alejandro Abdelnur commented on HADOOP-10646:
-

[~lmccay], I was thinking about that method as well. I may have time later this 
week; if you want to take over and repurpose this JIRA for both methods, go for 
it. THX

> KeyProvider buildVersionName method should be moved to a utils class
> 
>
> Key: HADOOP-10646
> URL: https://issues.apache.org/jira/browse/HADOOP-10646
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 3.0.0
>
>
> The buildVersionName() method should not be part of the KeyProvider public 
> API because keyversions could be opaque (not built based on the keyname and 
> key generation counter).
> KeyProvider implementations may choose to use buildVersionName() for reasons 
> such as described in HADOOP-10611.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10646) KeyProvider buildVersionName method should be moved to a utils class

2014-06-10 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026682#comment-14026682
 ] 

Larry McCay commented on HADOOP-10646:
--

Hi [~tucu00] - I am looking at moving unnestUri into a utils class as well.
What is your status on this jira?

We should make it the same utils class.
I'm thinking ProviderUtils.

Any thoughts?

> KeyProvider buildVersionName method should be moved to a utils class
> 
>
> Key: HADOOP-10646
> URL: https://issues.apache.org/jira/browse/HADOOP-10646
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 3.0.0
>
>
> The buildVersionName() method should not be part of the KeyProvider public 
> API because keyversions could be opaque (not built based on the keyname and 
> key generation counter).
> KeyProvider implementations may choose to use buildVersionName() for reasons 
> such as described in HADOOP-10611.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10646) KeyProvider buildVersionName method should be moved to a utils class

2014-06-10 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1402#comment-1402
 ] 

Owen O'Malley commented on HADOOP-10646:


It is a *feature* to have understandable key version names. The job tracker 
used to create unique identifiers for job ids and task ids too, but we fixed it 
to use patterns. As a result, you can actually understand what is happening.

Do not disable the feature.
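
For reference, the pattern being defended here, sketched as a minimal method 
(the "name@version" format matches the buildVersionName() convention discussed 
in this JIRA; treat the exact string as illustrative):

{code}
// A readable version name derived from the key name plus a counter,
// e.g. "mykey@3" -- understandable at a glance, unlike an opaque ID.
public static String buildVersionName(String name, int version) {
  return name + "@" + version;
}
{code}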

> KeyProvider buildVersionName method should be moved to a utils class
> 
>
> Key: HADOOP-10646
> URL: https://issues.apache.org/jira/browse/HADOOP-10646
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 3.0.0
>
>
> The buildVersionName() method should not be part of the KeyProvider public 
> API because keyversions could be opaque (not built based on the keyname and 
> key generation counter).
> KeyProvider implementations may choose to use buildVersionName() for reasons 
> such as described in HADOOP-10611.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10557) FsShell -cp -p does not preserve extended ACLs

2014-06-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026630#comment-14026630
 ] 

Chris Nauroth commented on HADOOP-10557:


[~ajisakaa], thank you for working on this, and [~jira.shegalov], thank you for 
the code review.  May I take a look before anything gets committed?  I expect 
I'll have time no later than Thursday, 6/12.

> FsShell -cp -p does not preserve extended ACLs
> --
>
> Key: HADOOP-10557
> URL: https://issues.apache.org/jira/browse/HADOOP-10557
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-10557.2.patch, HADOOP-10557.3.patch, 
> HADOOP-10557.patch
>
>
> This issue tracks enhancing FsShell cp to
> * preserve extended ACLs with the -p option
> or
> * add a new command-line option for preserving extended ACLs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10677) ExportSnapshot fails on kerberized cluster using s3a

2014-06-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-10677:
---

Attachment: HADOOP-10677-1.patch

Thanks to Matteo Bertozzi for the patch!

> ExportSnapshot fails on kerberized cluster using s3a
> 
>
> Key: HADOOP-10677
> URL: https://issues.apache.org/jira/browse/HADOOP-10677
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: David S. Wang
>Assignee: David S. Wang
> Attachments: HADOOP-10677-1.patch
>
>
> When using HBase ExportSnapshot on a kerberized cluster, exporting to s3a 
> using HADOOP-10400, we see the following problem:
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> patch283two
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
> The problem seems to be that the patch in HADOOP-10400 does not have 
> getCanonicalServiceName().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10677) ExportSnapshot fails on kerberized cluster using s3a

2014-06-10 Thread David S. Wang (JIRA)
David S. Wang created HADOOP-10677:
--

 Summary: ExportSnapshot fails on kerberized cluster using s3a
 Key: HADOOP-10677
 URL: https://issues.apache.org/jira/browse/HADOOP-10677
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.4.0
Reporter: David S. Wang
Assignee: David S. Wang
 Attachments: HADOOP-10677-1.patch

When using HBase ExportSnapshot on a kerberized cluster, exporting to s3a using 
HADOOP-10400, we see the following problem:

Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
patch283two
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)

The problem seems to be that the patch in HADOOP-10400 does not have 
getCanonicalServiceName().
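
For illustration, a minimal sketch of the apparent fix on S3AFileSystem 
(whether the actual patch returns null or a synthetic service name is an 
assumption here):

{code}
// Without an override, FileSystem.getCanonicalServiceName() tries to
// resolve the bucket name ("patch283two") as a hostname, producing the
// UnknownHostException above. Returning null tells the token machinery
// that this filesystem issues no delegation tokens.
@Override
public String getCanonicalServiceName() {
  return null;
}
{code}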



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10607) Create an API to Separate Credentials/Password Storage from Applications

2014-06-10 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026618#comment-14026618
 ] 

Larry McCay commented on HADOOP-10607:
--

I should have a patch to address those points at some point today.
Thanks for the review, Owen!

> Create an API to Separate Credentials/Password Storage from Applications
> 
>
> Key: HADOOP-10607
> URL: https://issues.apache.org/jira/browse/HADOOP-10607
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0
>
> Attachments: 10607-2.patch, 10607-3.patch, 10607-4.patch, 
> 10607-5.patch, 10607-6.patch, 10607-7.patch, 10607-8.patch, 10607-9.patch, 
> 10607.patch
>
>
> As with the filesystem API, we need to provide a generic mechanism to support 
> multiple credential storage mechanisms that are potentially from third 
> parties. 
> We need the ability to eliminate the storage of passwords and secrets in 
> clear text within configuration files or within code.
> Toward that end, I propose an API that is configured using a list of URLs of 
> CredentialProviders. The implementation will look for implementations using 
> the ServiceLoader interface and thus support third party libraries.
> Two providers will be included in this patch. One using the credentials cache 
> in MapReduce jobs and the other using Java KeyStores from either HDFS or 
> local file system. 
> A CredShell CLI will also be included in this patch which provides the 
> ability to manage the credentials within the stores.
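
For illustration, a hypothetical sketch of how such an API might be consumed 
(class and method names here are assumptions based on this description, not 
necessarily the attached patches):

{code}
// Resolve an alias against the configured list of provider URLs instead
// of reading a clear-text password from the configuration file.
char[] password = null;
for (CredentialProvider provider :
    CredentialProviderFactory.getProviders(conf)) {
  CredentialProvider.CredentialEntry entry =
      provider.getCredentialEntry("ssl.server.keystore.password");
  if (entry != null) {
    password = entry.getCredential();
    break;
  }
}
{code}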



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-06-10 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026616#comment-14026616
 ] 

Benoy Antony commented on HADOOP-10376:
---

In the _RefreshRegistry_ class, the state could be updated and accessed by 
different threads, so you need to synchronize the mutators and accessors. 


> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: HADOOP-10376.patch, HADOOP-10376.patch, 
> HADOOP-10376.patch, RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-06-10 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026620#comment-14026620
 ] 

Benoy Antony commented on HADOOP-10376:
---

Would _RefreshHandlerRegistry_ be a better name than _ RefreshRegistry_ ?

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: HADOOP-10376.patch, HADOOP-10376.patch, 
> HADOOP-10376.patch, RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10676) S3AOutputStream not reading new config knobs for multipart configs

2014-06-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-10676:
---

Attachment: HADOOP-10676-1.patch

> S3AOutputStream not reading new config knobs for multipart configs
> --
>
> Key: HADOOP-10676
> URL: https://issues.apache.org/jira/browse/HADOOP-10676
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: David S. Wang
>Assignee: David S. Wang
> Attachments: HADOOP-10676-1.patch
>
>
> S3AOutputStream.java does not have the code to read the new config knobs for 
> multipart configs.  This patch will add that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10676) S3AOutputStream not reading new config knobs for multipart configs

2014-06-10 Thread David S. Wang (JIRA)
David S. Wang created HADOOP-10676:
--

 Summary: S3AOutputStream not reading new config knobs for 
multipart configs
 Key: HADOOP-10676
 URL: https://issues.apache.org/jira/browse/HADOOP-10676
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.4.0
Reporter: David S. Wang
Assignee: David S. Wang


S3AOutputStream.java does not have the code to read the new config knobs for 
multipart configs.  This patch will add that.
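
For illustration, a minimal sketch of the intended behavior ({{conf}} is the 
filesystem's Hadoop Configuration; the key names follow the s3a patch in 
HADOOP-10400, while the defaults here are illustrative assumptions):

{code}
// S3AOutputStream should consult the multipart knobs rather than
// hardcoding them.
long partSize = conf.getLong("fs.s3a.multipart.size", 100 * 1024 * 1024);
long partSizeThreshold = conf.getLong("fs.s3a.multipart.threshold",
    2L * 1024 * 1024 * 1024);
{code}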



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10675) Add server-side encryption functionality to s3a

2014-06-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-10675:
---

Attachment: HADOOP-10675-2.patch

> Add server-side encryption functionality to s3a
> ---
>
> Key: HADOOP-10675
> URL: https://issues.apache.org/jira/browse/HADOOP-10675
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: David S. Wang
>Assignee: David S. Wang
> Attachments: HADOOP-10675-1.patch, HADOOP-10675-2.patch
>
>
> The current patch for s3a in HADOOP-10400 does not have the capability to 
> specify server-side encryption.  This JIRA will track the addition of such 
> functionality to HADOOP-10400, similar to what was done in HADOOP-10568 for 
> s3n.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026556#comment-14026556
 ] 

Hudson commented on HADOOP-9099:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1797 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1797/])
Moving CHANGES.txt entry for HADOOP-9099 to the correct section. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601482)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
> IP address
> ---
>
> Key: HADOOP-9099
> URL: https://issues.apache.org/jira/browse/HADOOP-9099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 1.2.0, 1-win, 2.5.0
>
> Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch
>
>
> I just hit this failure. We should use a more distinctive string for 
> "UnknownHost":
> Testcase: testNormalizeHostName took 0.007 sec
>   FAILED
> expected:<[65.53.5.181]> but was:<[UnknownHost]>
> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but 
> was:<[UnknownHost]>
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
> Will post a patch in a bit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10664) TestNetUtils.testNormalizeHostName fails

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026560#comment-14026560
 ] 

Hudson commented on HADOOP-10664:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1797 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1797/])
HADOOP-10664. TestNetUtils.testNormalizeHostName fails. Contributed by Aaron T. 
Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601478)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


> TestNetUtils.testNormalizeHostName fails
> 
>
> Key: HADOOP-10664
> URL: https://issues.apache.org/jira/browse/HADOOP-10664
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Chen He
>Assignee: Aaron T. Myers
>  Labels: test
> Fix For: 2.5.0
>
> Attachments: HADOOP-10664.patch
>
>
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:617)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9555) HA functionality that uses ZooKeeper may experience inadvertent TCP RST and miss session expiration event due to bug in client connection management

2014-06-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026513#comment-14026513
 ] 

Chris Nauroth commented on HADOOP-9555:
---

bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.

There are no new tests, but the new version of ZooKeeper, including the 
ZOOKEEPER-1702 patch, gets our existing ZK-related tests passing consistently 
on Windows.

> HA functionality that uses ZooKeeper may experience inadvertent TCP RST and 
> miss session expiration event due to bug in client connection management
> 
>
> Key: HADOOP-9555
> URL: https://issues.apache.org/jira/browse/HADOOP-9555
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9555.1.patch
>
>
> ZOOKEEPER-1702 tracks a client connection management bug.  The bug can cause 
> an unexpected TCP RST that ultimately prevents delivery of a session 
> expiration event.  The symptoms of the bug seem to show up more frequently on 
> Windows than on other platforms (though it's not really a Windows-specific 
> bug).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10664) TestNetUtils.testNormalizeHostName fails

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026458#comment-14026458
 ] 

Hudson commented on HADOOP-10664:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1770 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1770/])
HADOOP-10664. TestNetUtils.testNormalizeHostName fails. Contributed by Aaron T. 
Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601478)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


> TestNetUtils.testNormalizeHostName fails
> 
>
> Key: HADOOP-10664
> URL: https://issues.apache.org/jira/browse/HADOOP-10664
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Chen He
>Assignee: Aaron T. Myers
>  Labels: test
> Fix For: 2.5.0
>
> Attachments: HADOOP-10664.patch
>
>
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:617)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026454#comment-14026454
 ] 

Hudson commented on HADOOP-9099:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1770 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1770/])
Moving CHANGES.txt entry for HADOOP-9099 to the correct section. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601482)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
> IP address
> ---
>
> Key: HADOOP-9099
> URL: https://issues.apache.org/jira/browse/HADOOP-9099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 1.2.0, 1-win, 2.5.0
>
> Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch
>
>
> I just hit this failure. We should use a more distinctive string for 
> "UnknownHost":
> Testcase: testNormalizeHostName took 0.007 sec
>   FAILED
> expected:<[65.53.5.181]> but was:<[UnknownHost]>
> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but 
> was:<[UnknownHost]>
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
> Will post a patch in a bit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10675) Add server-side encryption functionality to s3a

2014-06-10 Thread David S. Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026435#comment-14026435
 ] 

David S. Wang commented on HADOOP-10675:


I'll submit this once HADOOP-10400 is committed; otherwise this patch will not 
apply, since it is based on top of HADOOP-10400.

> Add server-side encryption functionality to s3a
> ---
>
> Key: HADOOP-10675
> URL: https://issues.apache.org/jira/browse/HADOOP-10675
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: David S. Wang
>Assignee: David S. Wang
> Attachments: HADOOP-10675-1.patch
>
>
> The current patch for s3a in HADOOP-10400 does not have the capability to 
> specify server-side encryption.  This JIRA will track the addition of such 
> functionality to HADOOP-10400, similar to what was done in HADOOP-10568 for 
> s3n.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10675) Add server-side encryption functionality to s3a

2014-06-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-10675:
---

Attachment: HADOOP-10675-1.patch

> Add server-side encryption functionality to s3a
> ---
>
> Key: HADOOP-10675
> URL: https://issues.apache.org/jira/browse/HADOOP-10675
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.4.0
>Reporter: David S. Wang
>Assignee: David S. Wang
> Attachments: HADOOP-10675-1.patch
>
>
> The current patch for s3a in HADOOP-10400 does not have the capability to 
> specify server-side encryption.  This JIRA will track the addition of such 
> functionality to HADOOP-10400, similar to what was done in HADOOP-10568 for 
> s3n.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10675) Add server-side encryption functionality to s3a

2014-06-10 Thread David S. Wang (JIRA)
David S. Wang created HADOOP-10675:
--

 Summary: Add server-side encryption functionality to s3a
 Key: HADOOP-10675
 URL: https://issues.apache.org/jira/browse/HADOOP-10675
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.4.0
Reporter: David S. Wang
Assignee: David S. Wang


The current patch for s3a in HADOOP-10400 does not have the capability to 
specify server-side encryption.  This JIRA will track the addition of such 
functionality to HADOOP-10400, similar to what was done in HADOOP-10568 for s3n.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10664) TestNetUtils.testNormalizeHostName fails

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026341#comment-14026341
 ] 

Hudson commented on HADOOP-10664:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #579 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/579/])
HADOOP-10664. TestNetUtils.testNormalizeHostName fails. Contributed by Aaron T. 
Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601478)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


> TestNetUtils.testNormalizeHostName fails
> 
>
> Key: HADOOP-10664
> URL: https://issues.apache.org/jira/browse/HADOOP-10664
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Chen He
>Assignee: Aaron T. Myers
>  Labels: test
> Fix For: 2.5.0
>
> Attachments: HADOOP-10664.patch
>
>
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:617)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2014-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026337#comment-14026337
 ] 

Hudson commented on HADOOP-9099:


FAILURE: Integrated in Hadoop-Yarn-trunk #579 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/579/])
Moving CHANGES.txt entry for HADOOP-9099 to the correct section. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1601482)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
> IP address
> ---
>
> Key: HADOOP-9099
> URL: https://issues.apache.org/jira/browse/HADOOP-9099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 1.2.0, 1-win, 2.5.0
>
> Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch
>
>
> I just hit this failure. We should use a more distinctive string for 
> "UnknownHost":
> Testcase: testNormalizeHostName took 0.007 sec
>   FAILED
> expected:<[65.53.5.181]> but was:<[UnknownHost]>
> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but 
> was:<[UnknownHost]>
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
> Will post a patch in a bit.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-06-10 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14026247#comment-14026247
 ] 

Zesheng Wu commented on HADOOP-10376:
-

Hi Chris,
The proposal and patch both look great to me, and they clear up my doubt about 
why the namenode has so many refresh*Protocols.
One more minor suggestion: should we mark the old refresh* functions as 
deprecated?
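
For illustration, a minimal sketch of that suggestion; refreshServiceAcl() 
stands in for the existing per-feature refresh entry points:

{code}
public class AdminProtocolSketch {
  /** @deprecated superseded by the generic refresh protocol (HADOOP-10376). */
  @Deprecated
  public void refreshServiceAcl() {
    // old per-feature implementation, retained for compatibility
  }
}
{code}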

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: HADOOP-10376.patch, HADOOP-10376.patch, 
> HADOOP-10376.patch, RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)