[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559455#comment-13559455
 ] 

Chris Nauroth commented on HADOOP-8712:
---

This change breaks on Windows (branch-trunk-win) due to lack of a Windows 
implementation for the native method in hadoop.dll.  The fallback logic is a 
one-time check to see if hadoop.dll loaded successfully, so with this kind of 
failure, it won't fall back to {{ShellBasedUnixGroupsMapping}}.  I've filed 
HADOOP-9232 to track it.  Meanwhile, a workaround on Windows is to set the 
configuration back to {{ShellBasedUnixGroupsMapping}} manually in core-site.xml:

{code}
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
</property>
{code}


> Change default hadoop.security.group.mapping
> 
>
> Key: HADOOP-8712
> URL: https://issues.apache.org/jira/browse/HADOOP-8712
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch
>
>
> Change the hadoop.security.group.mapping in core-site to 
> JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-01-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559448#comment-13559448
 ] 

Chris Nauroth commented on HADOOP-9232:
---

HADOOP-8712 will change the default hadoop.security.group.mapping to 
{{JniBasedUnixGroupsMappingWithFallback}}.  This will break on Windows.  A 
workaround would be to manually configure hadoop.security.group.mapping back to 
{{ShellBasedUnixGroupsMapping}}.

We can fix the problem by providing a proper implementation of the method on 
Windows in hadoop.dll.  There is already similar logic in the winutils.exe 
groups command.
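The one-time fallback check described above can be sketched as follows. This is a simplified stand-in, not the actual Hadoop source: the class names and the {{nativeCodeLoaded}} flag here mirror {{NativeCodeLoader.isNativeCodeLoaded()}} and the mapping classes, but the real implementations differ.

```java
// Hypothetical sketch of the one-time fallback selection. The implementation
// is chosen once, based only on whether the native library loaded, so a
// hadoop.dll that loads but lacks the native method slips through.
public class FallbackSketch {
    interface GroupMapping { String name(); }

    static class JniMapping implements GroupMapping {
        public String name() { return "jni"; }
    }

    static class ShellMapping implements GroupMapping {
        public String name() { return "shell"; }
    }

    // Stand-in for NativeCodeLoader.isNativeCodeLoaded(): true when hadoop.dll
    // loaded, even if it is missing the getGroupForUser symbol.
    static boolean nativeCodeLoaded = true;

    static GroupMapping chooseImpl() {
        // One-time check at construction; a later UnsatisfiedLinkError from a
        // missing native method is never caught, so no fallback ever happens.
        return nativeCodeLoaded ? new JniMapping() : new ShellMapping();
    }

    public static void main(String[] args) {
        System.out.println(chooseImpl().name()); // jni: fallback never triggers
    }
}
```

This illustrates why the failure surfaces as an {{UnsatisfiedLinkError}} at call time rather than as a clean fallback at startup.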

To see the problem, start a NameNode and DataNode with hadoop.dll on the path 
and core-site.xml containing:

{code}
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value>
</property>
{code}

When the DataNode connects to the NameNode, you'll see this stack trace in the 
NameNode log:

{noformat}
13/01/21 23:19:26 WARN ipc.Server: IPC Server handler 0 on 19000, call 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.versionRequest from 
127.0.0.1:55352: error: java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
at 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
Method)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:58)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:89)
at 
org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1311)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.(FSPermissionChecker.java:51)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSuperuserPrivilege(FSPermissionChecker.java:72)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:4591)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:962)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:203)
at 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18305)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:474)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1018)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1778)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1774)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1450)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1772)
{noformat}


> JniBasedUnixGroupsMappingWithFallback fails on Windows with 
> UnsatisfiedLinkError
> 
>
> Key: HADOOP-9232
> URL: https://issues.apache.org/jira/browse/HADOOP-9232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native, security
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>
> {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
> properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
> in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
> code is loaded during startup.  In this case, hadoop.dll is present and 
> loaded, but it doesn't contain the right code.  There will be no attempt to 
> fallback to {{ShellBasedUnixGroupsMapping}}.



[jira] [Created] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-01-21 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9232:
-

 Summary: JniBasedUnixGroupsMappingWithFallback fails on Windows 
with UnsatisfiedLinkError
 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth


{{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic in 
{{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native code 
is loaded during startup.  In this case, hadoop.dll is present and loaded, but 
it doesn't contain the right code.  There will be no attempt to fallback to 
{{ShellBasedUnixGroupsMapping}}.



[jira] [Commented] (HADOOP-9231) Parametrize staging URL for the uniformity of distributionManagement

2013-01-21 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559408#comment-13559408
 ] 

Konstantin Boudnik commented on HADOOP-9231:


Clearly, no test is needed for this patch.

> Parametrize staging URL for the uniformity of distributionManagement
> 
>
> Key: HADOOP-9231
> URL: https://issues.apache.org/jira/browse/HADOOP-9231
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-9231.patch
>
>
> The build's {{distributionManagement}} section currently uses parametrization 
> for the snapshot repository. This is convenient and allows overriding the 
> value from a developer's custom profile.
> The same isn't yet available for release artifacts; adding it would make the 
> parametrization symmetric for both types.



[jira] [Commented] (HADOOP-8990) Some minor issues in protobuf based ipc

2013-01-21 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559402#comment-13559402
 ] 

Binglin Chang commented on HADOOP-8990:
---

Sorry, I made a mistake: it looks like the RPC response currently doesn't have 
a 4-byte total length, and you plan to add one, right? If that's the case, I 
think you are right; it is fine for non-blocking IO, although the prefix in 
the RPC response body seems redundant.
And since the total length is not known until the response header and response 
body are serialized into a pre-allocated buffer, you plan to serialize the 
header and body to a buffer first and then write to the socket. IMO this is 
the same as using a single protobuf that includes both header and body.


> Some minor issues in protobuf based ipc
> --
>
> Key: HADOOP-8990
> URL: https://issues.apache.org/jira/browse/HADOOP-8990
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Binglin Chang
>Priority: Minor
>
> 1. proto file naming
> RpcPayloadHeader.proto includes not only RpcPayLoadHeaderProto but also 
> RpcResponseHeaderProto, which is irrelevant to the file name.
> hadoop_rpc.proto only includes HadoopRpcRequestProto, and the filename 
> "hadoop_rpc" is strange compared to other .proto file names.
> How about merging those two files into HadoopRpc.proto?
> 2. proto class naming
> In an rpc request, RpcPayloadHeaderProto includes callId, but in an rpc 
> response callId is included in RpcResponseHeaderProto, and there is also 
> HadoopRpcRequestProto; this is just too confusing.
> 3. The rpc system is not fully protobuf based; there are still some 
> Writables: RpcRequestWritable and RpcResponseWritable, plus the rpc response 
> exception name and stack trace string.
> Also, RpcRequestWritable uses a protobuf-style varint32 prefix, but 
> RpcResponseWritable uses an int32 prefix; why this inconsistency?
> Currently an rpc request is split into length, PayLoadHeader, and PayLoad, 
> and the response into RpcResponseHeader, response, and error message.
> I think wrapping the request and response into a single RequestProto and 
> ResponseProto is better, because this gives a formal, complete wire format 
> definition; otherwise developers need to look into the source code and 
> hard-code the communication format.
> These issues make it very confusing and hard for developers to use these rpc 
> interfaces.
> Some of these issues can be solved without breaking compatibility and some 
> cannot, but at least we need to know what will change and what will stay 
> stable.



[jira] [Commented] (HADOOP-9231) Parametrize staging URL for the uniformity of distributionManagement

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559399#comment-13559399
 ] 

Hadoop QA commented on HADOOP-9231:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565904/HADOOP-9231.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2076//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2076//console

This message is automatically generated.

> Parametrize staging URL for the uniformity of distributionManagement
> 
>
> Key: HADOOP-9231
> URL: https://issues.apache.org/jira/browse/HADOOP-9231
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-9231.patch
>
>
> The build's {{distributionManagement}} section currently uses parametrization 
> for the snapshot repository. This is convenient and allows overriding the 
> value from a developer's custom profile.
> The same isn't yet available for release artifacts; adding it would make the 
> parametrization symmetric for both types.



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559390#comment-13559390
 ] 

Chris Nauroth commented on HADOOP-8924:
---

Hi, Colin.  Sorry to hear that this patch caused trouble in your dev 
environment.  There had been a related patch, HADOOP-9202, to change 
test-patch.sh to run mvn install before checking mvn eclipse:eclipse.  My 
understanding is that the established process is to run mvn install first to 
guarantee that everything is deployed into your local Maven repository, and 
then run mvn eclipse:eclipse.  That's based on the instructions shown here:

http://wiki.apache.org/hadoop/EclipseEnvironment

{quote}
From this directory you just 'cd'-ed into (which is also known as the 
top-level directory of a branch or a trunk checkout), perform:

$ mvn install -DskipTests
$ mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
{quote}

Does running an mvn install first fix the issue for you?  I just tested that 
this works in my environment by running:

{code}
rm -rf ~/.m2/repository/org/apache/hadoop/hadoop-maven-plugins && mvn install 
-DskipTests && mvn eclipse:eclipse
{code}

(The rm -rf was to force removal of any cached copy of the plugin that I might 
have in my local repository.)


> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, HADOOP-8924-branch-2.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-9231) Parametrize staging URL for the uniformity of distributionManagement

2013-01-21 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-9231:
---

Attachment: HADOOP-9231.patch

A short fix.

> Parametrize staging URL for the uniformity of distributionManagement
> 
>
> Key: HADOOP-9231
> URL: https://issues.apache.org/jira/browse/HADOOP-9231
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-9231.patch
>
>
> The build's {{distributionManagement}} section currently uses parametrization 
> for the snapshot repository. This is convenient and allows overriding the 
> value from a developer's custom profile.
> The same isn't yet available for release artifacts; adding it would make the 
> parametrization symmetric for both types.



[jira] [Updated] (HADOOP-9231) Parametrize staging URL for the uniformity of distributionManagement

2013-01-21 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-9231:
---

Status: Patch Available  (was: Open)

A simple fix.

> Parametrize staging URL for the uniformity of distributionManagement
> 
>
> Key: HADOOP-9231
> URL: https://issues.apache.org/jira/browse/HADOOP-9231
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-9231.patch
>
>
> The build's {{distributionManagement}} section currently uses parametrization 
> for the snapshot repository. This is convenient and allows overriding the 
> value from a developer's custom profile.
> The same isn't yet available for release artifacts; adding it would make the 
> parametrization symmetric for both types.



[jira] [Created] (HADOOP-9231) Parametrize staging URL for the uniformity of distributionManagement

2013-01-21 Thread Konstantin Boudnik (JIRA)
Konstantin Boudnik created HADOOP-9231:
--

 Summary: Parametrize staging URL for the uniformity of 
distributionManagement
 Key: HADOOP-9231
 URL: https://issues.apache.org/jira/browse/HADOOP-9231
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik


The build's {{distributionManagement}} section currently uses parametrization 
for the snapshot repository. This is convenient and allows overriding the 
value from a developer's custom profile.

The same isn't yet available for release artifacts; adding it would make the 
parametrization symmetric for both types.



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-01-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559337#comment-13559337
 ] 

Todd Lipcon commented on HADOOP-9112:
-

I think you're probably better off using the Java annotation processing tool 
(apt): http://docs.oracle.com/javase/6/docs/technotes/guides/apt/index.html

> test-patch should -1 for @Tests without a timeout
> -
>
> Key: HADOOP-9112
> URL: https://issues.apache.org/jira/browse/HADOOP-9112
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>
> With our current test running infrastructure, if a test with no timeout set 
> runs too long, it triggers a surefire-wide timeout, which for some reason 
> doesn't show up as a failed test in the test-patch output. Given that, we 
> should require that all tests have a timeout set, and have test-patch enforce 
> this with a simple check.



[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559326#comment-13559326
 ] 

Todd Lipcon commented on HADOOP-9150:
-

Daryn, can you take a look at this latest patch rev? We've seen users have a 
big perf impact due to this bug when their DNS infrastructure isn't well set up 
with nscd, etc. Would like to get it fixed ASAP. Thanks!

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
> tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown - eg in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559313#comment-13559313
 ] 

Colin Patrick McCabe commented on HADOOP-8924:
--

It seems like this broke {{mvn eclipse:eclipse}}.

I get this output:
{code}
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 6.478s
[INFO] Finished at: Mon Jan 21 18:30:18 PST 2013
[INFO] Final Memory: 33M/661M
[INFO] 
[ERROR] Plugin org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT or one of 
its dependencies could not be resolved: Could not find artifact 
org.apache.hadoop:hadoop-maven-plugins:jar:3.0.0-SNAPSHOT -> [Help 1]
org.apache.maven.plugin.PluginResolutionException: Plugin 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT or one of its 
dependencies could not be resolved: Could not find artifact 
org.apache.hadoop:hadoop-maven-plugins:jar:3.0.0-SNAPSHOT
at org.apache.maven.plugin.internal.DefaultPluginDependenciesResolver.resolve(DefaultPluginDependenciesResolver.java:140)
at org.apache.maven.plugin.internal.DefaultMavenPluginManager.getPluginDescriptor(DefaultMavenPluginManager.java:142)
at org.apache.maven.plugin.internal.DefaultMavenPluginManager.getMojoDescriptor(DefaultMavenPluginManager.java:261)
at org.apache.maven.plugin.DefaultBuildPluginManager.getMojoDescriptor(DefaultBuildPluginManager.java:185)
at org.apache.maven.lifecycle.internal.DefaultLifecycleExecutionPlanCalculator.calculateLifecycleMappings(DefaultLifecycleExecutionPlanCalculator.java:280)
at org.apache.maven.lifecycle.internal.DefaultLifecycleExecutionPlanCalculator.calculateForkedLifecycle(DefaultLifecycleExecutionPlanCalculator.java:520)
{code}

The underlying issue seems to be that {{generate-resources}} is broken, and 
maven-eclipse-plugin "Invokes the execution of the lifecycle phase 
generate-resources prior to executing itself."  (As described here: 
http://maven.apache.org/plugins/maven-eclipse-plugin/eclipse-mojo.html )

You can see that this command also fails with the same error: {{mvn 
generate-resources}}.  In fact, a lot of the early phases of the default 
lifecycle seem to fail: {{validate}}, {{initialize}}, {{generate-sources}}.  
For some reason {{mvn site}} works, though, despite the fact that it should be 
implicitly calling those earlier build phases.

> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, HADOOP-8924-branch-2.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-01-21 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559303#comment-13559303
 ] 

Surenkumar Nihalani commented on HADOOP-9112:
-

Adding a default timeout sounds like a band-aid rather than the solution. I 
was looking at the reflection API. It is good enough for us to check the 
timeout on each Test annotation (assuming the test follows the JUnit 4 
format), but for it to work we need a class object to start with. There are 
two ways to get that:
# From an instance, like {{instance.getClass()}}
# From the fully qualified name String, like {{Class c = 
Class.forName("com.duke.MyLocaleServiceProvider");}}

Getting an instance of the test object seems tough, so using the fully 
qualified name seems like a good approach. However, from a given file path 
there is no clean way to get the fully qualified name. I think grepping for 
"package *;" and "public class [w]+ ", concatenating the results, and passing 
that down to a java program as an argument should be doable.

After I have a reference to the class object, it is easy to check whether all 
the methods annotated with junit.Test have a timeout set.

Thoughts?
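A self-contained sketch of that reflection check. Note it uses a stand-in {{@Test}} annotation rather than JUnit's {{org.junit.Test}} so the example runs without dependencies; the real check would reflect on the JUnit annotation instead.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class TimeoutChecker {
    // Stand-in for org.junit.Test; JUnit 4's timeout element also defaults to 0.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {
        long timeout() default 0L;
    }

    // Example test class: one method with a timeout, one without.
    public static class SampleTests {
        @Test(timeout = 5000) public void testWithTimeout() {}
        @Test public void testWithoutTimeout() {}
    }

    // Returns true only if every @Test method declares a nonzero timeout.
    static boolean allTestsHaveTimeouts(Class<?> clazz) {
        for (Method m : clazz.getMethods()) {
            Test t = m.getAnnotation(Test.class);
            if (t != null && t.timeout() == 0L) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // Load by fully qualified name, as suggested above.
        Class<?> c = Class.forName("TimeoutChecker$SampleTests");
        System.out.println(allTestsHaveTimeouts(c)); // false: testWithoutTimeout
    }
}
```

The remaining (harder) piece is mapping a patched file path to that fully qualified name, as discussed above.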

> test-patch should -1 for @Tests without a timeout
> -
>
> Key: HADOOP-9112
> URL: https://issues.apache.org/jira/browse/HADOOP-9112
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>
> With our current test running infrastructure, if a test with no timeout set 
> runs too long, it triggers a surefire-wide timeout, which for some reason 
> doesn't show up as a failed test in the test-patch output. Given that, we 
> should require that all tests have a timeout set, and have test-patch enforce 
> this with a simple check.



[jira] [Commented] (HADOOP-8990) Some minor issues in protobuf based ipc

2013-01-21 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559270#comment-13559270
 ] 

Sanjay Radia commented on HADOOP-8990:
--

Won't the first 4-byte length (i.e. the total length) be sufficient to allow 
reading the data in a non-blocking fashion?

> Some minor issues in protobuf based ipc
> --
>
> Key: HADOOP-8990
> URL: https://issues.apache.org/jira/browse/HADOOP-8990
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Binglin Chang
>Priority: Minor
>
> 1. proto file naming
> RpcPayloadHeader.proto includes not only RpcPayLoadHeaderProto but also 
> RpcResponseHeaderProto, which is irrelevant to the file name.
> hadoop_rpc.proto only includes HadoopRpcRequestProto, and the filename 
> "hadoop_rpc" is strange compared to other .proto file names.
> How about merging those two files into HadoopRpc.proto?
> 2. proto class naming
> In an rpc request, RpcPayloadHeaderProto includes callId, but in an rpc 
> response callId is included in RpcResponseHeaderProto, and there is also 
> HadoopRpcRequestProto; this is just too confusing.
> 3. The rpc system is not fully protobuf based; there are still some 
> Writables: RpcRequestWritable and RpcResponseWritable, plus the rpc response 
> exception name and stack trace string.
> Also, RpcRequestWritable uses a protobuf-style varint32 prefix, but 
> RpcResponseWritable uses an int32 prefix; why this inconsistency?
> Currently an rpc request is split into length, PayLoadHeader, and PayLoad, 
> and the response into RpcResponseHeader, response, and error message.
> I think wrapping the request and response into a single RequestProto and 
> ResponseProto is better, because this gives a formal, complete wire format 
> definition; otherwise developers need to look into the source code and 
> hard-code the communication format.
> These issues make it very confusing and hard for developers to use these rpc 
> interfaces.
> Some of these issues can be solved without breaking compatibility and some 
> cannot, but at least we need to know what will change and what will stay 
> stable.



[jira] [Commented] (HADOOP-8990) Some minor issues in protobuf based ipc

2013-01-21 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559262#comment-13559262
 ] 

Binglin Chang commented on HADOOP-8990:
---

bq. Note that since rpc has a pluggable rpc engine, the rpc request itself is 
serialized by that engine. For the protobuf rpc engine the length is 
serialized as a varint.
I don't understand why the protobuf rpc engine has to be varint prefixed. 
Originally it was 4-byte int prefixed, HADOOP-8084 changed it to varint 
prefixed, and then at some later point RpcResponseWritable changed to using a 
varint while RpcRequestWritable kept using a 4-byte int.
I just think a 4-byte int is the better choice, because it is easy to handle 
in non-blocking code. 
Just for example, see Todd's ipc implementation:
https://github.com/toddlipcon/libhrpc/blob/master/rpc_client.cpp
{code}
uint32_t length;
CodedInputStream cis(&rbuf_[rbuf_consumed_], rbuf_available());
bool success = cis.ReadVarint32(&length);
if (!success) {
  if (rbuf_available() >= kMaxVarint32Size) {
    // TODO: error handling
    LOG(FATAL) << "bad varint";
  }
  // Not enough bytes in buffer to read varint, ask for at least another byte
  EnsureAvailableLength(rbuf_available() + 1,
      boost::bind(&RpcClient::ReadResponseHeaderLengthCallback, this,
                  asio::placeholders::error));
  return;
}
{code}
There is a lot of retry logic needed to read a varint in non-blocking code; it 
is much easier if the prefix is a fixed 4 bytes, and hence unified. 
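A minimal sketch of the trade-off discussed above, in Java rather than C++: with a fixed 4-byte prefix the reader knows up front exactly how many bytes it is waiting for, while a varint32 prefix may be split across reads and forces the kind of retry logic shown in the C++ excerpt. This is illustrative only; the class and method names are invented and this is not Hadoop's actual IPC code.

```java
import java.nio.ByteBuffer;

public class LengthPrefixDemo {

    // Fixed prefix: readable iff 4 bytes are buffered; the caller knows
    // exactly how many more bytes it needs when this returns -1.
    static int readFixedInt32(ByteBuffer buf) {
        if (buf.remaining() < 4) {
            return -1; // caller must wait for more bytes
        }
        return buf.getInt();
    }

    // Varint prefix: may consume 1-5 bytes; returns -1 if the terminating
    // byte (high bit clear) has not arrived yet, forcing a retry.
    static int readVarint32(ByteBuffer buf) {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
            if (!buf.hasRemaining()) {
                return -1; // incomplete varint, caller must buffer more and retry
            }
            byte b = buf.get();
            result |= (b & 0x7f) << shift;
            if ((b & 0x80) == 0) {
                return result;
            }
        }
        throw new IllegalArgumentException("malformed varint");
    }

    public static void main(String[] args) {
        // 300 encoded as a varint32 is two bytes: 0xAC 0x02.
        ByteBuffer varint = ByteBuffer.wrap(new byte[] {(byte) 0xAC, 0x02});
        System.out.println(readVarint32(varint));   // 300

        // The same length as a fixed 4-byte big-endian int.
        ByteBuffer fixed = ByteBuffer.wrap(new byte[] {0, 0, 1, 0x2C});
        System.out.println(readFixedInt32(fixed));  // 300

        // A truncated varint cannot say how many bytes are still missing.
        ByteBuffer partial = ByteBuffer.wrap(new byte[] {(byte) 0xAC});
        System.out.println(readVarint32(partial));  // -1
    }
}
```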




> Some minor issues in protobuf based ipc
> --
>
> Key: HADOOP-8990
> URL: https://issues.apache.org/jira/browse/HADOOP-8990
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Binglin Chang
>Priority: Minor
>
> 1. proto file naming
> RpcPayloadHeader.proto includes not only RpcPayLoadHeaderProto but also 
> RpcResponseHeaderProto, which is unrelated to the file name.
> hadoop_rpc.proto only includes HadoopRpcRequestProto, and the filename 
> "hadoop_rpc" is strange compared to other .proto file names.
> How about merging those two files into HadoopRpc.proto?
> 2. proto class naming
> In the rpc request, callId is included in RpcPayloadHeaderProto, but in the 
> rpc response it is included in RpcResponseHeaderProto, and there is also 
> HadoopRpcRequestProto; this is just too confusing.
> 3. The rpc system is not fully protobuf based; there are still some Writables:
> RpcRequestWritable and RpcResponseWritable, plus the rpc response exception 
> name and stack trace string.
> And RpcRequestWritable uses a protobuf-style varint32 prefix, but 
> RpcResponseWritable uses an int32 prefix; why this inconsistency?
> Currently an rpc request is split into length, PayLoadHeader, and PayLoad, 
> and the response into RpcResponseHeader, response, and error message. 
> I think wrapping the request and response into a single RequestProto and 
> ResponseProto is better, because this gives a formal, complete wire format 
> definition; otherwise developers need to look into the source code and 
> hard-code the communication format.
> These issues make it very confusing and hard for developers to use these rpc 
> interfaces.
> Some of these issues can be solved without breaking compatibility, but some 
> cannot; at the very least we need to know what will change and what will 
> stay stable.



[jira] [Commented] (HADOOP-9220) Unnecessary transition to standby in ActiveStandbyElector

2013-01-21 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559135#comment-13559135
 ] 

Tom White commented on HADOOP-9220:
---

It's true that the elector checks for a stale ZK client, but that doesn't 
prevent the problem here, which is caused by i) having multiple watchers for 
the ZK client (due to the creation of a new watcher in monitorLockNodeAsync), 
and ii) a postponed call to recheckElectability unnecessarily forcing a new 
election (this call doesn't go through the watcher).

> Unnecessary transition to standby in ActiveStandbyElector
> -
>
> Key: HADOOP-9220
> URL: https://issues.apache.org/jira/browse/HADOOP-9220
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-9220.patch, HADOOP-9220.patch
>
>
> When performing a manual failover from one HA node to a second, under some 
> circumstances the second node will transition from standby -> active -> 
> standby -> active. This is with automatic failover enabled, so there is a ZK 
> cluster doing leader election.



[jira] [Updated] (HADOOP-9177) Address issues that came out from running static code analysis on winutils

2013-01-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9177:


   Resolution: Fixed
Fix Version/s: 1-win
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch. Committed it to branch-1-win.

Thank you Ivan.

> Address issues that came out from running static code analysis on winutils
> --
>
> Key: HADOOP-9177
> URL: https://issues.apache.org/jira/browse/HADOOP-9177
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9177.branch-1-win.patch
>
>




[jira] [Commented] (HADOOP-9177) Address issues that came out from running static code analysis on winutils

2013-01-21 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559115#comment-13559115
 ] 

Chuan Liu commented on HADOOP-9177:
---

+1 

Change looks good. Static code analysis can help us eliminate potentially 
vulnerable code in the future. The annotations also make the code more robust.

> Address issues that came out from running static code analysis on winutils
> --
>
> Key: HADOOP-9177
> URL: https://issues.apache.org/jira/browse/HADOOP-9177
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9177.branch-1-win.patch
>
>




[jira] [Commented] (HADOOP-9177) Address issues that came out from running static code analysis on winutils

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559094#comment-13559094
 ] 

Hadoop QA commented on HADOOP-9177:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12565847/HADOOP-9177.branch-1-win.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2075//console

This message is automatically generated.

> Address issues that came out from running static code analysis on winutils
> --
>
> Key: HADOOP-9177
> URL: https://issues.apache.org/jira/browse/HADOOP-9177
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9177.branch-1-win.patch
>
>




[jira] [Resolved] (HADOOP-8516) fsck command does not work when executed on Windows Hadoop installation

2013-01-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic resolved HADOOP-8516.


Resolution: Cannot Reproduce

This is fixed in branch-1-win along the way. Resolving the issue as cannot 
reproduce to avoid accumulating Jiras.

> fsck command does not work when executed on Windows Hadoop installation
> ---
>
> Key: HADOOP-8516
> URL: https://issues.apache.org/jira/browse/HADOOP-8516
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Trupti Dhavle
>
> I tried to run following command on Windows Hadoop installation
> hadoop fsck /tmp
> THis command was run as Administrator. 
> The command fails with following error-
> 12/06/20 00:24:55 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:Administrator cause:java.net.ConnectException: 
> Connection refused: connect
> Exception in thread "main" java.net.ConnectException: Connection refused: connect
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
> at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211)
> at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
> at java.net.Socket.connect(Socket.java:529)
> at java.net.Socket.connect(Socket.java:478)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
> at sun.net.www.http.HttpClient.New(HttpClient.java:306)
> at sun.net.www.http.HttpClient.New(HttpClient.java:323)
> at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)
> at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
> at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)
> at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:141)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:110)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1103)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:110)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:182)
> /tmp is owned by Administrator
> hadoop fs -ls /
> Found 3 items
> drwxr-xr-x   - Administrator supergroup  0 2012-06-08 15:08 
> /benchmarks
> drwxrwxrwx   - Administrator supergroup  0 2012-06-11 23:00 /tmp
> drwxr-xr-x   - Administrator supergroup  0 2012-06-19 17:01 /user



[jira] [Updated] (HADOOP-9177) Address issues that came out from running static code analysis on winutils

2013-01-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9177:
---

Status: Patch Available  (was: Open)

> Address issues that came out from running static code analysis on winutils
> --
>
> Key: HADOOP-9177
> URL: https://issues.apache.org/jira/browse/HADOOP-9177
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9177.branch-1-win.patch
>
>




[jira] [Resolved] (HADOOP-8517) --config option does not work with Hadoop installation on Windows

2013-01-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic resolved HADOOP-8517.


Resolution: Cannot Reproduce

This is fixed in branch-1-win along the way. Resolving the issue as cannot 
reproduce.

> --config option does not work with Hadoop installation on Windows
> -
>
> Key: HADOOP-8517
> URL: https://issues.apache.org/jira/browse/HADOOP-8517
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Trupti Dhavle
>
> I ran following command
> hadoop --config c:\\hadoop\conf fs -ls /
> I get following error for --config option
> Unrecognized option: --config
> Could not create the Java virtual machine.



[jira] [Resolved] (HADOOP-9074) Hadoop install scripts for Windows

2013-01-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic resolved HADOOP-9074.


Resolution: Fixed

This is committed to branch-1-win, resolving. 

> Hadoop install scripts for Windows
> --
>
> Key: HADOOP-9074
> URL: https://issues.apache.org/jira/browse/HADOOP-9074
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9074.branch-1-win.installer.patch
>
>
> Tracking Jira to post Hadoop install scripts for Windows. Scripts will 
> provide means for Windows users/developers to install/uninstall Hadoop on a 
> single-node.



[jira] [Updated] (HADOOP-9177) Address issues that came out from running static code analysis on winutils

2013-01-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9177:
---

Attachment: HADOOP-9177.branch-1-win.patch

Attaching the patch.

> Address issues that came out from running static code analysis on winutils
> --
>
> Key: HADOOP-9177
> URL: https://issues.apache.org/jira/browse/HADOOP-9177
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-9177.branch-1-win.patch
>
>




[jira] [Commented] (HADOOP-9220) Unnecessary transition to standby in ActiveStandbyElector

2013-01-21 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559081#comment-13559081
 ] 

Bikas Saha commented on HADOOP-9220:


Not quite sure I understand. Todd had added a reference to the ZK client so 
that the Elector would only accept watch notifications from the last ZK client. 
That means only 1 ZK client would be driving the Elector.

> Unnecessary transition to standby in ActiveStandbyElector
> -
>
> Key: HADOOP-9220
> URL: https://issues.apache.org/jira/browse/HADOOP-9220
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-9220.patch, HADOOP-9220.patch
>
>
> When performing a manual failover from one HA node to a second, under some 
> circumstances the second node will transition from standby -> active -> 
> standby -> active. This is with automatic failover enabled, so there is a ZK 
> cluster doing leader election.



[jira] [Commented] (HADOOP-8990) Some minor issues in protobuf based ipc

2013-01-21 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559006#comment-13559006
 ] 

Sanjay Radia commented on HADOOP-8990:
--

bq. The final wire format would be (I assume the prefix is unified to a 4-byte 
integer):
Yes, except that the first len is 4 bytes and the *rest are varints*. Note 
that since rpc has a pluggable rpc engine, the rpc request itself is 
serialized by that engine. For the protobuf rpc engine the length is 
serialized as a varint.

The individual lengths work well with the layering in the system. By layering 
I mean *both* the layers within ipc/rpc and the fact that we allow multiple 
rpc engines.
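The layered framing described above (an outer fixed length, with each inner message carrying its own varint32 prefix, as protobuf's delimited convention does) can be sketched roughly as follows. This is an illustration only, not the exact Hadoop wire format; the class and helper names are invented.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class FramingDemo {

    // Protobuf-style varint32: 7 bits per byte, high bit marks continuation.
    static void writeVarint32(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7f) != 0) {
            out.write((value & 0x7f) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // Frame layout: [4-byte total length][varint len][header][varint len][payload]
    static byte[] frame(byte[] header, byte[] payload) throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        writeVarint32(body, header.length);   // inner varint prefix
        body.write(header);
        writeVarint32(body, payload.length);  // inner varint prefix
        body.write(payload);
        byte[] bodyBytes = body.toByteArray();
        return ByteBuffer.allocate(4 + bodyBytes.length)
                .putInt(bodyBytes.length)     // outer fixed 4-byte length
                .put(bodyBytes)
                .array();
    }

    public static void main(String[] args) throws IOException {
        byte[] msg = frame(new byte[] {1, 2, 3}, new byte[] {4, 5});
        // 4-byte outer length + (1 + 3) header + (1 + 2) payload = 11 bytes
        System.out.println(msg.length); // 11
    }
}
```

The outer fixed length lets the lowest ipc layer pull a whole request off the wire without understanding it, while each rpc engine is free to choose its own prefixing for the messages inside.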

> Some minor issues in protobuf based ipc
> --
>
> Key: HADOOP-8990
> URL: https://issues.apache.org/jira/browse/HADOOP-8990
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Binglin Chang
>Priority: Minor
>
> 1. proto file naming
> RpcPayloadHeader.proto includes not only RpcPayLoadHeaderProto but also 
> RpcResponseHeaderProto, which is unrelated to the file name.
> hadoop_rpc.proto only includes HadoopRpcRequestProto, and the filename 
> "hadoop_rpc" is strange compared to other .proto file names.
> How about merging those two files into HadoopRpc.proto?
> 2. proto class naming
> In the rpc request, callId is included in RpcPayloadHeaderProto, but in the 
> rpc response it is included in RpcResponseHeaderProto, and there is also 
> HadoopRpcRequestProto; this is just too confusing.
> 3. The rpc system is not fully protobuf based; there are still some Writables:
> RpcRequestWritable and RpcResponseWritable, plus the rpc response exception 
> name and stack trace string.
> And RpcRequestWritable uses a protobuf-style varint32 prefix, but 
> RpcResponseWritable uses an int32 prefix; why this inconsistency?
> Currently an rpc request is split into length, PayLoadHeader, and PayLoad, 
> and the response into RpcResponseHeader, response, and error message. 
> I think wrapping the request and response into a single RequestProto and 
> ResponseProto is better, because this gives a formal, complete wire format 
> definition; otherwise developers need to look into the source code and 
> hard-code the communication format.
> These issues make it very confusing and hard for developers to use these rpc 
> interfaces.
> Some of these issues can be solved without breaking compatibility, but some 
> cannot; at the very least we need to know what will change and what will 
> stay stable.



[jira] [Commented] (HADOOP-9160) Adopt JMX for management protocols

2013-01-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558952#comment-13558952
 ] 

Allen Wittenauer commented on HADOOP-9160:
--

bq. The users of the protocols are sysadmins and management daemons.

As a member of this subset and as mentioned previously, I want the ability to 
turn off writes to guarantee that JMX can be used as a read-only interface.  
I'll -1 any patch that doesn't have it.
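One hypothetical way to guarantee the read-only behavior being requested is to expose management state through an MXBean interface that declares only getters, so the MBeanServer itself rejects attribute writes. All names below (ClusterStatusMXBean, the domain string, the sample value) are invented for illustration and are not Hadoop APIs.

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.JMException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ReadOnlyJmxDemo {

    // Getter-only interface: JMX derives a read-only attribute from it.
    public interface ClusterStatusMXBean {
        int getLiveNodes();
    }

    public static class ClusterStatus implements ClusterStatusMXBean {
        public int getLiveNodes() { return 42; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=ClusterStatus");
        server.registerMBean(new ClusterStatus(), name);

        // Reads succeed.
        System.out.println(server.getAttribute(name, "LiveNodes")); // 42

        // Writes fail: the interface declares no setter, so the attribute
        // is not writable and the MBeanServer refuses the set.
        try {
            server.setAttribute(name, new Attribute("LiveNodes", 7));
        } catch (JMException e) {
            System.out.println("write rejected");
        }
    }
}
```

A deployment-time switch could go further and simply not register any MBean that declares setters or operations, giving the off-by-default write behavior described above.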

> Adopt JMX for management protocols
> --
>
> Key: HADOOP-9160
> URL: https://issues.apache.org/jira/browse/HADOOP-9160
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Luke Lu
> Attachments: hadoop-9160-demo-branch-1.txt
>
>
> Currently we use Hadoop RPC (and some HTTP, notably fsck) for admin 
> protocols. We should consider adopting JMX for future admin protocols, as 
> it's the industry standard for Java server management with wide client 
> support.
> Having an alternative/redundant RPC mechanism is very desirable for admin 
> protocols. I've seen multiple cases in the past where NN and/or JT RPC were 
> locked up solid due to various bugs and/or RPC thread pool exhaustion, while 
> HTTP and/or JMX worked just fine.
> Other desirable benefits include admin protocol backward compatibility and 
> introspectability, which is convenient for a centralized management system to 
> manage multiple Hadoop clusters of different versions. Another notable 
> benefit is that it's much easier to implement new admin commands in JMX 
> (especially with MXBean) than Hadoop RPC, especially in trunk (as well as 
> 0.23+ and 2.x).
> Since Hadoop RPC doesn't guarantee backward compatibility (probably not ever 
> for branch-1), there are few external tools depending on it. We can keep the 
> old protocols for as long as needed. New commands should be in JMX. The 
> transition can be gradual and backward-compatible.



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558778#comment-13558778
 ] 

Hudson commented on HADOOP-8924:


Integrated in Hadoop-Mapreduce-trunk #1320 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1320/])
HADOOP-8924. Add CHANGES.txt description missed in commit r1435380. 
(Revision 1436181)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1436181
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, HADOOP-8924-branch-2.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558729#comment-13558729
 ] 

Hudson commented on HADOOP-8924:


Integrated in Hadoop-Hdfs-trunk #1292 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1292/])
HADOOP-8924. Add CHANGES.txt description missed in commit r1435380. 
(Revision 1436181)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1436181
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, HADOOP-8924-branch-2.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-21 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558691#comment-13558691
 ] 

Zesheng Wu commented on HADOOP-9223:


Can anyone confirm this?

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
> Attachments: HADOOP-9223.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current hadoop config items are mainly interpolated from the *-site.xml 
> files. In our production environment, we need a mechanism that can specify 
> config items through system properties, similar to gflags in systems built 
> with C++; it's really very handy.
> The main purpose of this patch is to improve the convenience of hadoop 
> systems, especially when people do testing or perf tuning, which otherwise 
> always requires modifying the *-site.xml files.
> If this patch is applied, people can start hadoop programs this way: 
> java -cp $class_path -Dhadoop.property.$name=$value $program
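The mechanism described above can be sketched roughly as follows, using a plain Map as a stand-in for Hadoop's Configuration class. The "hadoop.property." prefix comes from the issue description; everything else is invented for illustration and is not the actual patch.

```java
import java.util.HashMap;
import java.util.Map;

public class SysPropConfigDemo {

    static final String PREFIX = "hadoop.property.";

    // Overlays any -Dhadoop.property.<name>=<value> system properties onto
    // the config after the *-site.xml values have been loaded, so command-line
    // overrides win without editing the XML files.
    static void applySystemOverrides(Map<String, String> conf) {
        System.getProperties().stringPropertyNames().stream()
                .filter(k -> k.startsWith(PREFIX))
                .forEach(k -> conf.put(k.substring(PREFIX.length()),
                                       System.getProperty(k)));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.replication", "3");          // value from *-site.xml

        // Simulate: java -Dhadoop.property.dfs.replication=1 ...
        System.setProperty("hadoop.property.dfs.replication", "1");
        applySystemOverrides(conf);

        System.out.println(conf.get("dfs.replication")); // 1
    }
}
```

In the real Configuration class the overlay would presumably run at load time, after the resource files are parsed, so every subsequent get() sees the override.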



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558681#comment-13558681
 ] 

Hudson commented on HADOOP-8924:


Integrated in Hadoop-Yarn-trunk #103 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/103/])
HADOOP-8924. Add CHANGES.txt description missed in commit r1435380. 
(Revision 1436181)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1436181
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, HADOOP-8924-branch-2.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-21 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558659#comment-13558659
 ] 

Vadim Bondarev commented on HADOOP-9225:


Yes, you are right. I now do a correct check of Snappy support.

> Cover package org.apache.hadoop.compress.Snappy
> ---
>
> Key: HADOOP-9225
> URL: https://issues.apache.org/jira/browse/HADOOP-9225
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9225-branch-0.23-a.patch, 
> HADOOP-9225-branch-2-a.patch, HADOOP-9225-branch-2-b.patch, 
> HADOOP-9225-trunk-a.patch, HADOOP-9225-trunk-b.patch
>
>




[jira] [Commented] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558626#comment-13558626
 ] 

Hadoop QA commented on HADOOP-9225:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12565748/HADOOP-9225-trunk-b.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2074//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2074//console

This message is automatically generated.

> Cover package org.apache.hadoop.compress.Snappy
> ---
>
> Key: HADOOP-9225
> URL: https://issues.apache.org/jira/browse/HADOOP-9225
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9225-branch-0.23-a.patch, 
> HADOOP-9225-branch-2-a.patch, HADOOP-9225-branch-2-b.patch, 
> HADOOP-9225-trunk-a.patch, HADOOP-9225-trunk-b.patch
>
>




[jira] [Updated] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-21 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9225:
---

Attachment: HADOOP-9225-trunk-b.patch
HADOOP-9225-branch-2-b.patch

> Cover package org.apache.hadoop.compress.Snappy
> ---
>
> Key: HADOOP-9225
> URL: https://issues.apache.org/jira/browse/HADOOP-9225
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9225-branch-0.23-a.patch, 
> HADOOP-9225-branch-2-a.patch, HADOOP-9225-branch-2-b.patch, 
> HADOOP-9225-trunk-a.patch, HADOOP-9225-trunk-b.patch
>
>

