[jira] [Commented] (HADOOP-10305) Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to core-default.xml

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13885113#comment-13885113
 ] 

Hadoop QA commented on HADOOP-10305:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12625805/HADOOP-10305.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3499//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3499//console

This message is automatically generated.

> Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to 
> core-default.xml
> -
>
> Key: HADOOP-10305
> URL: https://issues.apache.org/jira/browse/HADOOP-10305
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-10305.patch
>
>
> "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" were 
> added in HADOOP-9420, but these two parameters are not written in 
> core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10305) Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to core-default.xml

2014-01-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10305:
---

Assignee: Akira AJISAKA
  Status: Patch Available  (was: Open)

> Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to 
> core-default.xml
> -
>
> Key: HADOOP-10305
> URL: https://issues.apache.org/jira/browse/HADOOP-10305
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-10305.patch
>
>
> "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" were 
> added in HADOOP-9420, but these two parameters are not written in 
> core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10305) Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to core-default.xml

2014-01-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10305:
---

Attachment: HADOOP-10305.patch

Attaching a patch.
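
For reference, a sketch of the kind of entries such a patch would add to 
core-default.xml (the descriptions and defaults here are illustrative, not 
quoted from the attached patch):

{code}
<property>
  <name>rpc.metrics.quantile.enable</name>
  <value>false</value>
  <description>
    Setting this property to true, together with
    rpc.metrics.percentiles.intervals, enables percentile
    (50/75/90/95/99th) latency metrics for RPC.
  </description>
</property>

<property>
  <name>rpc.metrics.percentiles.intervals</name>
  <value></value>
  <description>
    A comma-separated list of rollover intervals, in seconds, for the
    percentile latency metrics. Used only when
    rpc.metrics.quantile.enable is true.
  </description>
</property>
{code}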

> Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to 
> core-default.xml
> -
>
> Key: HADOOP-10305
> URL: https://issues.apache.org/jira/browse/HADOOP-10305
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: Akira AJISAKA
> Attachments: HADOOP-10305.patch
>
>
> "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" were 
> added in HADOOP-9420, but these two parameters are not written in 
> core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10306) Unnecessary weak reference map to cache classes in Configuration

2014-01-28 Thread Hiroshi Ikeda (JIRA)
Hiroshi Ikeda created HADOOP-10306:
--

 Summary: Unnecessary weak reference map to cache classes in 
Configuration
 Key: HADOOP-10306
 URL: https://issues.apache.org/jira/browse/HADOOP-10306
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Trivial


In Configuration.getClassByNameOrNull():
{code}
synchronized (CACHE_CLASSES) {
  map = CACHE_CLASSES.get(classLoader);
  if (map == null) {
map = Collections.synchronizedMap(
  new WeakHashMap<String, WeakReference<Class<?>>>());
CACHE_CLASSES.put(classLoader, map);
  }
}
{code}
Change "new WeaHashMap()" to "new HashMap" or 
something. Otherwise, even while the class is actively used, this may drop its 
class cache.
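
To see why a WeakHashMap keyed by class name can lose entries even while the 
class is in use, here is a minimal self-contained sketch (class and key names 
are hypothetical; GC timing varies by JVM):

{code}
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyCacheDemo {
  public static void main(String[] args) throws Exception {
    Map<String, WeakReference<Class<?>>> cache =
        new WeakHashMap<String, WeakReference<Class<?>>>();
    // Use a non-interned key so nothing else holds it strongly.
    String key = new String("org.example.SomeClass");
    cache.put(key, new WeakReference<Class<?>>(String.class));
    key = null;        // drop the only strong reference to the key
    System.gc();       // a hint only; collection is not guaranteed
    Thread.sleep(100);
    // The entry is typically gone, even though the cached Class is in use.
    System.out.println("entries after GC: " + cache.size());
  }
}
{code}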





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10305) Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to core-default.xml

2014-01-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10305:
---

Description: "rpc.metrics.quantile.enable" and 
"rpc.metrics.percentiles.intervals" were added in HADOOP-9420, but these two 
parameters are not written in core-default.xml.  (was: 
"rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" are added 
in HADOOP-9420, but these two parameters are not written in core-default.xml.)

> Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to 
> core-default.xml
> -
>
> Key: HADOOP-10305
> URL: https://issues.apache.org/jira/browse/HADOOP-10305
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: Akira AJISAKA
>
> "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" were 
> added in HADOOP-9420, but these two parameters are not written in 
> core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10305) Add "rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" to core-default.xml

2014-01-28 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10305:
--

 Summary: Add "rpc.metrics.quantile.enable" and 
"rpc.metrics.percentiles.intervals" to core-default.xml
 Key: HADOOP-10305
 URL: https://issues.apache.org/jira/browse/HADOOP-10305
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Akira AJISAKA


"rpc.metrics.quantile.enable" and "rpc.metrics.percentiles.intervals" are added 
in HADOOP-9420, but these two parameters are not written in core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10304) Configuration should not expose its instance in constructors

2014-01-28 Thread Hiroshi Ikeda (JIRA)
Hiroshi Ikeda created HADOOP-10304:
--

 Summary: Configuration should not expose its instance in 
constructors
 Key: HADOOP-10304
 URL: https://issues.apache.org/jira/browse/HADOOP-10304
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor


org.apache.hadoop.conf.Configuration exposes a reference of its instance in 
constructors via its class variable REGISTRY, which means incomplete instances 
are accessible. For example addDefaultResource() may access incomplete 
instances (especially for subclasses of Configuration).

Actually, the static methods in Configuration do not need to access its 
instances; it is enough for each instance to check for modification of the 
class variables. This also helps avoid deadlock between locking an instance 
and locking the class object, which may happen when resolving the race 
conditions still present in Configuration.
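
A minimal hypothetical illustration of the escape pattern described above 
(names simplified; this is not the actual Configuration code):

{code}
import java.util.Map;
import java.util.WeakHashMap;

class Conf {
  private static final Map<Conf, Object> REGISTRY =
      new WeakHashMap<Conf, Object>();

  private final String defaultResource;

  Conf() {
    synchronized (Conf.class) {
      REGISTRY.put(this, null);  // "this" escapes before construction ends
    }
    // A subclass constructor has not even started running yet.
    this.defaultResource = "core-default.xml";
  }

  // A static method iterating REGISTRY may observe a partially
  // constructed instance, e.g. a null defaultResource.
  static synchronized void addDefaultResource(String name) {
    for (Conf c : REGISTRY.keySet()) {
      System.out.println(name + " -> " + c.defaultResource);
    }
  }
}
{code}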



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10303) multi-supergroup supports

2014-01-28 Thread Jiqiu (JIRA)
Jiqiu created HADOOP-10303:
--

 Summary: multi-supergroup supports
 Key: HADOOP-10303
 URL: https://issues.apache.org/jira/browse/HADOOP-10303
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0, 1.2.1
Reporter: Jiqiu
Priority: Minor


Most operating systems support multiple groups of administrators. 
Hadoop, however, supports only a single supergroup.
This JIRA proposes enhancing open source Hadoop to support multiple groups of 
administrators.
For example, we may have data administrators (supergroup A) to manage HDFS 
and application administrators (supergroup B) to manage MapReduce.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10291) TestSecurityUtil#testSocketAddrWithIP fails

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884995#comment-13884995
 ] 

Hudson commented on HADOOP-10291:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5056 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5056/])
HADOOP-10291. TestSecurityUtil#testSocketAddrWithIP fails due to test order 
dependency. (Contributed by Mit Desai) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1562353)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java


> TestSecurityUtil#testSocketAddrWithIP fails
> ---
>
> Key: HADOOP-10291
> URL: https://issues.apache.org/jira/browse/HADOOP-10291
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Mit Desai
>Assignee: Mit Desai
>  Labels: java7
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10291.patch
>
>
> testSocketAddrWithIP fails with Assertion Error
> {noformat}
> Running org.apache.hadoop.security.TestSecurityUtil
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.389 sec <<< 
> FAILURE!
> testSocketAddrWithIP(org.apache.hadoop.security.TestSecurityUtil)  Time 
> elapsed: 275 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<127.0.0.1:123> but was:
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.verifyTokenService(TestSecurityUtil.java:271)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.verifyAddress(TestSecurityUtil.java:290)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.verifyServiceAddr(TestSecurityUtil.java:306)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.testSocketAddrWithIP(TestSecurityUtil.java:334)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:242)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:137)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> Results :
> Failed tests:   
> testSocketAddrWithIP(org.apache.hadoop.security.TestSecurityUtil): 
> expected

[jira] [Updated] (HADOOP-10291) TestSecurityUtil#testSocketAddrWithIP fails

2014-01-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10291:
---

  Resolution: Fixed
   Fix Version/s: 2.3.0
  3.0.0
Target Version/s: 2.3.0
  Status: Resolved  (was: Patch Available)

+1 for the patch. Committed to trunk, branch-2 and branch-2.3.

Thanks for the contribution [~mitdesai].

> TestSecurityUtil#testSocketAddrWithIP fails
> ---
>
> Key: HADOOP-10291
> URL: https://issues.apache.org/jira/browse/HADOOP-10291
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Mit Desai
>Assignee: Mit Desai
>  Labels: java7
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10291.patch
>
>
> testSocketAddrWithIP fails with Assertion Error
> {noformat}
> Running org.apache.hadoop.security.TestSecurityUtil
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.389 sec <<< 
> FAILURE!
> testSocketAddrWithIP(org.apache.hadoop.security.TestSecurityUtil)  Time 
> elapsed: 275 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<127.0.0.1:123> but was:
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.verifyTokenService(TestSecurityUtil.java:271)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.verifyAddress(TestSecurityUtil.java:290)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.verifyServiceAddr(TestSecurityUtil.java:306)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.testSocketAddrWithIP(TestSecurityUtil.java:334)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:242)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:137)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> Results :
> Failed tests:   
> testSocketAddrWithIP(org.apache.hadoop.security.TestSecurityUtil): 
> expected:<127.0.0.1:123> but was:
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884984#comment-13884984
 ] 

Hadoop QA commented on HADOOP-10295:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12625778/HADOOP-10295.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-distcp.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3498//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3498//console

This message is automatically generated.

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
> hadoop-10295.patch
>
>
> Currently while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> types of checksum, users have to either use "-skipcrccheck" or hit checksum 
> mismatch exceptions. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884971#comment-13884971
 ] 

Hadoop QA commented on HADOOP-8643:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12538987/hadoop-8643.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3497//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3497//console

This message is automatically generated.

> hadoop-client should exclude hadoop-annotations from hadoop-common dependency
> -
>
> Key: HADOOP-8643
> URL: https://issues.apache.org/jira/browse/HADOOP-8643
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: hadoop-8643.txt
>
>
> When reviewing HADOOP-8370 I missed that changing the scope of 
> hadoop-annotations in hadoop-common to compile would make hadoop-annotations 
> bubble up in hadoop-client. Because of this we need to exclude it explicitly.
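
A sketch of the kind of exclusion this implies in the hadoop-client POM 
(illustrative; the actual patch may differ):

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}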



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884963#comment-13884963
 ] 

Hadoop QA commented on HADOOP-9646:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587901/HADOOP-9646.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3496//console

This message is automatically generated.

> Inconsistent exception specifications in FileUtils#chmod
> 
>
> Key: HADOOP-9646
> URL: https://issues.apache.org/jira/browse/HADOOP-9646
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HADOOP-9646.001.patch, HADOOP-9646.002.patch
>
>
> There are two FileUtils#chmod methods:
> {code}
> public static int chmod(String filename, String perm
>   ) throws IOException, InterruptedException;
> public static int chmod(String filename, String perm, boolean recursive)
> throws IOException;
> {code}
> The first one just calls the second one with {{recursive = false}}, but 
> despite that it is declared as throwing {{InterruptedException}}, something 
> the second one doesn't declare.
> The new Java7 chmod API, which we will transition to once JDK6 support is 
> dropped, does *not* throw {{InterruptedException}}
> See 
> [http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
>  java.nio.file.attribute.UserPrincipal)]
> So we should make these consistent by removing the {{InterruptedException}}
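
For reference, a sketch of the consistent wrapper this implies (illustrative; 
the actual patch may differ):

{code}
// The wrapper no longer declares InterruptedException; it simply
// delegates to the recursive variant with recursive = false.
public static int chmod(String filename, String perm) throws IOException {
  return chmod(filename, perm, false);
}
{code}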



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884960#comment-13884960
 ] 

Hadoop QA commented on HADOOP-9822:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12596295/HADOOP-9822.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3495//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3495//console

This message is automatically generated.

> create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
> RetryCache constructor
> ---
>
> Key: HADOOP-9822
> URL: https://issues.apache.org/jira/browse/HADOOP-9822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.1-beta, 2.3.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Attachments: HADOOP-9822.1.patch, HADOOP-9822.2.patch
>
>
> The magic number "16" is also used in ClientId.BYTE_LENGTH, so hard-coding 
> the magic number "16" in the RetryCache constructor is a bit confusing.
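
A sketch of the constant the summary suggests (placement and javadoc are 
illustrative, not from the patch):

{code}
// In RetryCache: name the minimum capacity instead of hard-coding 16,
// which otherwise reads like ClientId.BYTE_LENGTH (also 16).
private static final int MAX_CAPACITY = 16;
{code}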



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9940) Make http jetty request logger NCSARequestLog fully configurable through log4j

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884957#comment-13884957
 ] 

Hadoop QA commented on HADOOP-9940:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602453/HADOOP-9940.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3494//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3494//console

This message is automatically generated.

> Make http jetty request logger NCSARequestLog fully configurable through log4j
> --
>
> Key: HADOOP-9940
> URL: https://issues.apache.org/jira/browse/HADOOP-9940
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 0.23.10, 2.3.0
>Reporter: Jonathan Eagles
> Attachments: HADOOP-9940.patch, HADOOP-9940.patch
>
>
> Some options, such as the log date format, are not available:
> http://grepcode.com/file/repo1.maven.org/maven2/org.mortbay.jetty/jetty/6.1.26/org/mortbay/jetty/NCSARequestLog.java?av=f



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9477) posixGroups support for LDAP groups mapping service

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884954#comment-13884954
 ] 

Hadoop QA commented on HADOOP-9477:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581898/HADOOP-9477.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3493//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3493//console

This message is automatically generated.

> posixGroups support for LDAP groups mapping service
> ---
>
> Key: HADOOP-9477
> URL: https://issues.apache.org/jira/browse/HADOOP-9477
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.4-alpha
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.3.0
>
> Attachments: HADOOP-9477.patch, HADOOP-9477.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> It would be nice to support posixGroups in the LdapGroupsMapping service. 
> Below is from the current description of the provider:
> hadoop.security.group.mapping.ldap.search.filter.group:
> An additional filter to use when searching for LDAP groups. This should be
> changed when resolving groups against a non-Active Directory installation.
> posixGroups are currently not a supported group class.
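
For context, a sketch of the group search filter configuration this 
improvement targets (the property name is from the description above; the 
value shown is an illustrative standard RFC 2307 object class):

{code}
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.group</name>
  <value>(objectClass=posixGroup)</value>
</property>
{code}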



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-10295:
---

Status: Patch Available  (was: Open)

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
> hadoop-10295.patch
>
>
> Currently while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> types of checksum, users have to either use "-skipcrccheck" or hit checksum 
> mismatch exceptions. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-10295:
---

Attachment: HADOOP-10295.002.patch

Thanks for the comments, Kihwal and Sangjin! This 002 patch is based on my 
001 patch and Laurent's patch, and it also preserves the block size when 
processing the preserve-checksum-type option.

I've tested the patch in my local cluster. In my test I simply generated 
some files with different checksum types and ran distcp with and without 
"-pc". The distcp succeeded when "-pc" was enabled.
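
For reference, the test runs amount to something like the following (paths 
are illustrative):

{noformat}
# copy, preserving each source file's checksum type on the target
hadoop distcp -pc hdfs://src-cluster/data hdfs://dst-cluster/data
{noformat}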

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
> hadoop-10295.patch
>
>
> Currently while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> types of checksum, users have to either use "-skipcrccheck" or hit checksum 
> mismatch exceptions. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread takeshi.miao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884945#comment-13884945
 ] 

takeshi.miao commented on HADOOP-10274:
---

stack & Uma Maheswara Rao G
Thanks for the review, guys :)

> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error msg "Request is a replay (34) - PROCESS_TGS" while 
> using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages are as follows...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase would retry (5 * 10 times) and 
> recover from this _'request is a replay (34)'_ issue, but from the HBase 
> user's viewpoint the error msg on the first line may be frightening, as at 
> first sight we were afraid data loss was occurring...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> {code}
> So I'd like to suggest to change the logging level 

[jira] [Commented] (HADOOP-8943) Support multiple group mapping providers

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884936#comment-13884936
 ] 

Hadoop QA commented on HADOOP-8943:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561104/HADOOP-8943.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3492//console

This message is automatically generated.

> Support multiple group mapping providers
> 
>
> Key: HADOOP-8943
> URL: https://issues.apache.org/jira/browse/HADOOP-8943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.3.0
>
> Attachments: HADOOP-8943.patch, HADOOP-8943.patch, HADOOP-8943.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
>   After discussing LdapGroupMapping with Natty, we need to improve it so that: 
> 1. It's possible to do different group mapping for different 
> users/principals. For example, AD users should go to the LdapGroupMapping 
> service for groups, but service principals such as hdfs and mapred can still 
> use the default ShellBasedUnixGroupsMapping; 
> 2. Multiple ADs can be supported for LdapGroupMapping; 
> 3. It's possible to configure what kind of users/principals (with 
> domain/realm as an option) should use which group mapping service/mechanism;
> 4. It's possible to configure and combine multiple existing mapping providers 
> without writing code to implement a new one (see the sketch below).
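
A sketch of what item 4 might look like in configuration (the composite 
provider class and property names are hypothetical, not from the attached 
patch):

{code}
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.CompositeGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.providers</name>
  <value>shell4services,ad4users</value>
</property>
<property>
  <name>hadoop.security.group.mapping.provider.shell4services</name>
  <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.provider.ad4users</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
{code}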



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10285) Allow CallQueue impls to be swapped at runtime (part 2: admin interface) Depends on: subtask2

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884884#comment-13884884
 ] 

Hadoop QA commented on HADOOP-10285:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12625700/subtask3_admin_interface.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3491//console

This message is automatically generated.

> Allow CallQueue impls to be swapped at runtime (part 2: admin interface) 
> Depends on: subtask2
> -
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask3_admin_interface.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10285) Allow CallQueue impls to be swapped at runtime (part 2: admin interface) Depends on: subtask2

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Status: Patch Available  (was: Open)

> Allow CallQueue impls to be swapped at runtime (part 2: admin interface) 
> Depends on: subtask2
> -
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask3_admin_interface.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10285) Allow CallQueue impls to be swapped at runtime (part 2: admin interface) Depends on: subtask2

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Attachment: subtask3_admin_interface.patch

Users can swap the queue by running:

{noformat}
hadoop dfsadmin -refreshCallQueue
{noformat}

The code touches a lot of places, but it effectively mirrors what 
refreshServiceAcl does to accomplish the same thing.

> Allow CallQueue impls to be swapped at runtime (part 2: admin interface) 
> Depends on: subtask2
> -
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask3_admin_interface.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals) Depends on: subtask1

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884828#comment-13884828
 ] 

Hadoop QA commented on HADOOP-10302:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12625691/subtask2_runtime_swap_internal.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3490//console

This message is automatically generated.

> Allow CallQueue impls to be swapped at runtime (part 1: internals) Depends 
> on: subtask1
> ---
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask2_runtime_swap_internal.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-01-28 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884803#comment-13884803
 ] 

Chris Li commented on HADOOP-10278:
---

I've uploaded a patch for https://issues.apache.org/jira/browse/HADOOP-10302 
which should make some of the design decisions in this patch more clear.

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
> Attachments: subtask1.2.patch, subtask1.3.patch, subtask1.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10285) Allow CallQueue impls to be swapped at runtime (part 2: admin interface) Depends on: subtask2

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Summary: Allow CallQueue impls to be swapped at runtime (part 2: admin 
interface) Depends on: subtask2  (was: Allow CallQueue impls to be swapped at 
runtime (part 2: admin interface))

> Allow CallQueue impls to be swapped at runtime (part 2: admin interface) 
> Depends on: subtask2
> -
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10278:
--

Attachment: (was: subtask1.2.patch)

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
> Attachments: subtask1.3.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10278:
--

Attachment: (was: subtask1.patch)

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
> Attachments: subtask1.3.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals) Depends on: subtask1

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10302:
--

Status: Patch Available  (was: Open)

> Allow CallQueue impls to be swapped at runtime (part 1: internals) Depends 
> on: subtask1
> ---
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask2_runtime_swap_internal.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals) Depends on: subtask1

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10302:
--

Summary: Allow CallQueue impls to be swapped at runtime (part 1: internals) 
Depends on: subtask1  (was: Allow CallQueue impls to be swapped at runtime 
(part 1: internals))

> Allow CallQueue impls to be swapped at runtime (part 1: internals) Depends 
> on: subtask1
> ---
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask2_runtime_swap_internal.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals)

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10302:
--

Attachment: (was: subtask2_runtime_swap_internal.patch)

> Allow CallQueue impls to be swapped at runtime (part 1: internals)
> --
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask2_runtime_swap_internal.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals)

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10302:
--

Attachment: subtask2_runtime_swap_internal.patch

> Allow CallQueue impls to be swapped at runtime (part 1: internals)
> --
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask2_runtime_swap_internal.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals)

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10302:
--

Attachment: subtask2_runtime_swap_internal.patch

Depends on subtask1

Swapping is done through the reconfigure(CallQueue newQ) method on custom 
CallQueue impls.  The old call queue is given the responsibility of swapping to 
the new queue, which it will either accept or reject.

Both old and new queues are frozen during a swap, not accepting new takes or 
puts.

When queues have been swapped (or not in case of failure), they are unfrozen. 
Producers and consumers waiting on the old queue are awoken and begin drawing 
from the new queue.

CallQueueBase is modified in order to handle waiting() while frozen and drawing 
from the next queue after a successful swap.
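
A much-simplified sketch of the handoff idea (hypothetical API; it omits the 
freezing of producers and consumers described above):

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

class SwappableCallQueue<E> {
  private final AtomicReference<BlockingQueue<E>> active;

  SwappableCallQueue(BlockingQueue<E> initial) {
    this.active = new AtomicReference<BlockingQueue<E>>(initial);
  }

  void put(E call) throws InterruptedException {
    active.get().put(call);
  }

  E take() throws InterruptedException {
    return active.get().take();
  }

  // The old queue "accepts" the swap by draining into the new one; a real
  // implementation must freeze puts/takes for this window, as the comment
  // above describes, to avoid losing calls racing into the old queue.
  synchronized boolean reconfigure(BlockingQueue<E> newQueue) {
    BlockingQueue<E> old = active.get();
    active.set(newQueue);
    old.drainTo(newQueue);
    return true;
  }
}
{code}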


> Allow CallQueue impls to be swapped at runtime (part 1: internals)
> --
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: subtask2_runtime_swap_internal.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals)

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10302:
--

Description: 
We wish to swap the active call queue during runtime in order to do performance 
tuning without restarting the namenode.

This patch adds only the internals necessary to swap. Part 2 will add a user 
interface so that it can be used.

> Allow CallQueue impls to be swapped at runtime (part 1: internals)
> --
>
> Key: HADOOP-10302
> URL: https://issues.apache.org/jira/browse/HADOOP-10302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds only the internals necessary to swap. Part 2 will add a user 
> interface so that it can be used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10285) Allow CallQueue impls to be swapped at runtime (part 2: admin interface)

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Summary: Allow CallQueue impls to be swapped at runtime (part 2: admin 
interface)  (was: Allow CallQueue impls to be swapped at runtime)

> Allow CallQueue impls to be swapped at runtime (part 2: admin interface)
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10302) Allow CallQueue impls to be swapped at runtime (part 1: internals)

2014-01-28 Thread Chris Li (JIRA)
Chris Li created HADOOP-10302:
-

 Summary: Allow CallQueue impls to be swapped at runtime (part 1: 
internals)
 Key: HADOOP-10302
 URL: https://issues.apache.org/jira/browse/HADOOP-10302
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884745#comment-13884745
 ] 

stack commented on HADOOP-10274:


Thank you [~umamaheswararao] for the bit of admin interjection.

> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error msg "Request is a replay (34) - PROCESS_TGS" while 
> using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related msg is as follows...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase would do the retry (5 * 10 times) and 
> recover from this _'request is a replay (34)'_ issue, but from the HBase 
> user's viewpoint, the error msg at the first line may be frightening, as we 
> were afraid some data loss was occurring at first sight...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> {code}
> So I'd like to suggest to change the logging level from '_E

[jira] [Updated] (HADOOP-10285) Allow CallQueue impls to be swapped at runtime

2014-01-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Description: 
We wish to swap the active call queue during runtime in order to do performance 
tuning without restarting the namenode.
This patch adds the ability to refresh the call queue on the namenode, through 
dfsadmin -refreshCallQueue



> Allow CallQueue impls to be swapped at runtime
> --
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10301) AuthenticationFilter should return Forbidden for failed authentication

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884740#comment-13884740
 ] 

Hadoop QA commented on HADOOP-10301:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12625672/HADOOP-10301.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3489//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3489//console

This message is automatically generated.

> AuthenticationFilter should return Forbidden for failed authentication
> --
>
> Key: HADOOP-10301
> URL: https://issues.apache.org/jira/browse/HADOOP-10301
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-10301.branch-23.patch, HADOOP-10301.patch
>
>
> The hadoop-auth AuthenticationFilter returns a 401 Unauthorized without a 
> WWW-Authenticate header.  This is illegal per the HTTP RFC and causes an NPE 
> in HttpUrlConnection.
> This is half of a fix that affects webhdfs.  See HDFS-4564.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-01-28 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884720#comment-13884720
 ] 

Chris Li commented on HADOOP-10278:
---

Thanks for checking it out.

1. In a later patch I'll be introducing the ability to swap the call queue at 
runtime, which is essential for performance tuning without restarting the 
namenode. The FIFOCallQueue responds to methods needed to accomplish this 
transparently to the server.

2. I would have preferred this myself (and earlier versions did this), but we 
will need control of the locks to do runtime queue swapping. In any case, the 
FairCallQueue (which should be coming in subtask5) will use this same locking 
code, so refactoring it into the base makes things cleaner.

I will start uploading the other subtask patches.
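
As a rough sketch of what "configurable, keyed on port" can look like (the key 
name {{ipc.<port>.callqueue.impl}} is illustrative, not the committed one):

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.hadoop.conf.Configuration;

public class CallQueueFactory {
  // Instantiate the queue class configured for this server's port, falling
  // back to a plain LinkedBlockingQueue when nothing is configured.
  @SuppressWarnings("unchecked")
  static <E> BlockingQueue<E> create(Configuration conf, int port, int capacity) {
    Class<?> clazz = conf.getClass("ipc." + port + ".callqueue.impl",
        LinkedBlockingQueue.class);
    try {
      return (BlockingQueue<E>) clazz.getConstructor(int.class)
          .newInstance(capacity);
    } catch (Exception e) {
      throw new RuntimeException("Could not instantiate call queue " + clazz, e);
    }
  }
}
{code}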

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
> Attachments: subtask1.2.patch, subtask1.3.patch, subtask1.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10301) AuthenticationFilter should return Forbidden for failed authentication

2014-01-28 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-10301:
-

Status: Patch Available  (was: Open)

> AuthenticationFilter should return Forbidden for failed authentication
> --
>
> Key: HADOOP-10301
> URL: https://issues.apache.org/jira/browse/HADOOP-10301
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-10301.branch-23.patch, HADOOP-10301.patch
>
>
> The hadoop-auth AuthenticationFilter returns a 401 Unauthorized without a 
> WWW-Authenticate header.  This is illegal per the HTTP RFC and causes an NPE 
> in HttpUrlConnection.
> This is half of a fix that affects webhdfs.  See HDFS-4564.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10301) AuthenticationFilter should return Forbidden for failed authentication

2014-01-28 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-10301:
-

Attachment: HADOOP-10301.patch
HADOOP-10301.branch-23.patch

> AuthenticationFilter should return Forbidden for failed authentication
> --
>
> Key: HADOOP-10301
> URL: https://issues.apache.org/jira/browse/HADOOP-10301
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-10301.branch-23.patch, HADOOP-10301.patch
>
>
> The hadoop-auth AuthenticationFilter returns a 401 Unauthorized without a 
> WWW-Authenticate header.  This is illegal per the HTTP RFC and causes an NPE 
> in HttpUrlConnection.
> This is half of a fix that affects webhdfs.  See HDFS-4564.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10301) AuthenticationFilter should return Forbidden for failed authentication

2014-01-28 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-10301:


 Summary: AuthenticationFilter should return Forbidden for failed 
authentication
 Key: HADOOP-10301
 URL: https://issues.apache.org/jira/browse/HADOOP-10301
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker


The hadoop-auth AuthenticationFilter returns a 401 Unauthorized without a 
WWW-Authenticate header.  This is illegal per the HTTP RFC and causes an NPE in 
HttpUrlConnection.

This is half of a fix that affects webhdfs.  See HDFS-4564.
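
A minimal sketch of the distinction at stake, with hypothetical helper names 
(challenge, reject); a 401 is only legal with a WWW-Authenticate challenge, so 
a definitive authentication failure is better reported as 403 Forbidden:

{code}
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

public class AuthResponseSketch {
  // Legal 401: the header tells the client how to authenticate.
  static void challenge(HttpServletResponse resp) throws IOException {
    resp.setHeader("WWW-Authenticate", "Negotiate");
    resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
  }

  // Definitive failure: no challenge to offer, so 403 avoids the
  // header-less 401 that trips up HttpUrlConnection.
  static void reject(HttpServletResponse resp) throws IOException {
    resp.sendError(HttpServletResponse.SC_FORBIDDEN);
  }
}
{code}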



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-01-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884698#comment-13884698
 ] 

Daryn Sharp commented on HADOOP-10278:
--

Upon quick glance it looks much cleaner.  Questions:
# What advantage will the custom fifo call queue offer over a standard java 
queue?
# Instead of the new base class providing a concrete queue implementation 
(locking and all), is it possible for the custom queues to use a containment 
relationship with a standard java queue?

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
> Attachments: subtask1.2.patch, subtask1.3.patch, subtask1.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10300) Allowed deferred sending of call responses

2014-01-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884525#comment-13884525
 ] 

Suresh Srinivas commented on HADOOP-10300:
--

Big +1 for this feature. This will let us reduce the number of handlers we 
currently need. The only thing we need to protect against is accepting so many 
requests that responding to them becomes a bottleneck. That can be addressed as 
we continue to work on this issue.

> Allowed deferred sending of call responses
> --
>
> Key: HADOOP-10300
> URL: https://issues.apache.org/jira/browse/HADOOP-10300
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> RPC handlers currently do not return until the RPC call completes and the 
> response is sent, or a partially sent response has been queued for the 
> responder.  It would be useful for a proxy method to notify the handler to 
> not yet send the call's response.
> A potential use case is that a namespace handler in the NN might want to 
> return before the edit log is synced so it can service more requests and 
> allow increased batching of edits per sync.  Background syncing could later 
> trigger the sending of the call response to the client.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-01-28 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884494#comment-13884494
 ] 

Chris Li commented on HADOOP-10278:
---

Hi [~daryn],

Could you take a look please?

Thanks,

Chris

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
> Attachments: subtask1.2.patch, subtask1.3.patch, subtask1.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10300) Allowed deferred sending of call responses

2014-01-28 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-10300:


 Summary: Allowed deferred sending of call responses
 Key: HADOOP-10300
 URL: https://issues.apache.org/jira/browse/HADOOP-10300
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


RPC handlers currently do not return until the RPC call completes and the 
response is sent, or a partially sent response has been queued for the 
responder.  It would be useful for a proxy method to notify the handler to not 
yet send the call's response.

A potential use case is that a namespace handler in the NN might want to return 
before the edit log is synced so it can service more requests and allow 
increased batching of edits per sync.  Background syncing could later trigger 
the sending of the call response to the client.
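
A minimal sketch of the shape of this, with hypothetical names (Call, defer, 
flushAfterSync); the actual hooks into the RPC responder are not designed here:

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DeferredResponder {
  interface Call {
    void sendResponse();
  }

  private final BlockingQueue<Call> pending = new LinkedBlockingQueue<Call>();

  // Handler path: park the call instead of responding, freeing the handler
  // thread to service the next request.
  void defer(Call call) {
    pending.add(call);
  }

  // Background path: after e.g. an edit-log sync completes, flush every
  // parked call's response to its client.
  void flushAfterSync() {
    Call c;
    while ((c = pending.poll()) != null) {
      c.sendResponse();
    }
  }
}
{code}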



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10272) Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces in the path

2014-01-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-10272:
---

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

OK. I just found out this is a duplicate of HDFS-4329 when trying to rebase my 
patch to trunk. Resolving as a duplicate.

> Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces 
> in the path
> ---
>
> Key: HADOOP-10272
> URL: https://issues.apache.org/jira/browse/HADOOP-10272
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Shuaishuai Nie
>Assignee: Chuan Liu
> Attachments: HADOOP-10272.patch
>
>
> Repro steps:
> With a folder structure like /ab/c d/ef.txt, the commands (hadoop fs 
> -copyFromLocal /ab/ /) and (hadoop fs -copyFromLocal "/ab/c d/" /) fail with 
> the error:
> copyFromLocal: File file:/ab/c%20d/ef.txt does not exist
> However, the command (hadoop fs -copyFromLocal "/ab/c d/ef.txt" /) succeeds.
> It seems hadoop treats files and directories differently in "copyFromLocal".
> This only happens in Hadoop 2 and causes 2 Hive unit test failures 
> (external_table_with_space_in_location_path.q and 
> load_hdfs_file_with_space_in_the_name.q).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10272) Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces in the path

2014-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884422#comment-13884422
 ] 

Hadoop QA commented on HADOOP-10272:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12625621/HADOOP-10272.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3488//console

This message is automatically generated.

> Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces 
> in the path
> ---
>
> Key: HADOOP-10272
> URL: https://issues.apache.org/jira/browse/HADOOP-10272
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Shuaishuai Nie
>Assignee: Chuan Liu
> Attachments: HADOOP-10272.patch
>
>
> Repro steps:
> With a folder structure like /ab/c d/ef.txt, the commands (hadoop fs 
> -copyFromLocal /ab/ /) and (hadoop fs -copyFromLocal "/ab/c d/" /) fail with 
> the error:
> copyFromLocal: File file:/ab/c%20d/ef.txt does not exist
> However, the command (hadoop fs -copyFromLocal "/ab/c d/ef.txt" /) succeeds.
> It seems hadoop treats files and directories differently in "copyFromLocal".
> This only happens in Hadoop 2 and causes 2 Hive unit test failures 
> (external_table_with_space_in_location_path.q and 
> load_hdfs_file_with_space_in_the_name.q).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10272) Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces in the path

2014-01-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-10272:
---

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces 
> in the path
> ---
>
> Key: HADOOP-10272
> URL: https://issues.apache.org/jira/browse/HADOOP-10272
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Shuaishuai Nie
>Assignee: Chuan Liu
> Attachments: HADOOP-10272.patch
>
>
> Repro steps:
> With a folder structure like /ab/c d/ef.txt, the commands (hadoop fs 
> -copyFromLocal /ab/ /) and (hadoop fs -copyFromLocal "/ab/c d/" /) fail with 
> the error:
> copyFromLocal: File file:/ab/c%20d/ef.txt does not exist
> However, the command (hadoop fs -copyFromLocal "/ab/c d/ef.txt" /) succeeds.
> It seems hadoop treats files and directories differently in "copyFromLocal".
> This only happens in Hadoop 2 and causes 2 Hive unit test failures 
> (external_table_with_space_in_location_path.q and 
> load_hdfs_file_with_space_in_the_name.q).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10272) Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces in the path

2014-01-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-10272:
---

Attachment: HADOOP-10272.patch

The root cause is that {{PathData.getStringForChildPath()}} returns a path 
string that is already encoded, i.e. ' ' is encoded as '%20'. This path string 
is later passed through the Path constructor to the URI constructor again, 
which leads to double encoding. The doubly encoded path then causes the copy to 
fail, because it no longer matches the original user input path. I suspect 
Unix/Linux also has this problem unless the Java URI implementation is 
different on that platform. Attaching a patch that addresses the issue. 
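
A standalone demo of the double encoding (not the patch itself): feeding an 
already-encoded path string back through {{java.net.URI}} quotes the '%' again:

{code}
import java.net.URI;

public class DoubleEncodingDemo {
  public static void main(String[] args) throws Exception {
    URI first = new URI("file", null, "/ab/c d/ef.txt", null);
    System.out.println(first);   // file:/ab/c%20d/ef.txt
    // Re-encoding the already-encoded path quotes '%' as '%25':
    URI second = new URI("file", null, first.getRawPath(), null);
    System.out.println(second);  // file:/ab/c%2520d/ef.txt -- no such file
  }
}
{code}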

> Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces 
> in the path
> ---
>
> Key: HADOOP-10272
> URL: https://issues.apache.org/jira/browse/HADOOP-10272
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.2.0
>Reporter: Shuaishuai Nie
>Assignee: Chuan Liu
> Attachments: HADOOP-10272.patch
>
>
> Repro steps:
> With a folder structure like /ab/c d/ef.txt, the commands (hadoop fs 
> -copyFromLocal /ab/ /) and (hadoop fs -copyFromLocal "/ab/c d/" /) fail with 
> the error:
> copyFromLocal: File file:/ab/c%20d/ef.txt does not exist
> However, the command (hadoop fs -copyFromLocal "/ab/c d/ef.txt" /) succeeds.
> It seems hadoop treats files and directories differently in "copyFromLocal".
> This only happens in Hadoop 2 and causes 2 Hive unit test failures 
> (external_table_with_space_in_location_path.q and 
> load_hdfs_file_with_space_in_the_name.q).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884301#comment-13884301
 ] 

Sangjin Lee commented on HADOOP-10295:
--

Agreed, the option needs to mean that the checksum algorithm *and* the 
blocksize are preserved.

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, hadoop-10295.patch
>
>
> Currently while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> types of checksum, users have to either use "-skipcrccheck" or have checksum 
> mismatching exception. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884262#comment-13884262
 ] 

stack commented on HADOOP-10255:


[~sureshms] Thanks for committing.  On 'We need to agree upon a release to 
align this change.', let's align on hadoop3 (hbase 0.96/0.98 depend on 
httpserver and should be able to run on any 2.x hadoops).  Thanks.

> Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for 
> compatibility
> -
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.3.0
>
> Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
> HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
> HADOOP-10255.003.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}}, and brings 
>  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884212#comment-13884212
 ] 

Kihwal Lee commented on HADOOP-10295:
-

Thanks for working on this, Jing.  One thing to note is that the block size 
needs to be identical in addition to the checksum parameters in order for the 
checksums to match. So it might make more sense to introduce an option to 
preserve the two together.
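
For reference, a hedged sketch of the comparison under discussion using the 
public {{FileSystem.getFileChecksum}} API (the distcp option wiring is not 
shown):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumMatch {
  // Checksums are only comparable when the checksum type, bytes-per-checksum,
  // and block size all line up between the source and target files.
  static boolean sameChecksum(FileSystem srcFs, Path src,
                              FileSystem dstFs, Path dst) throws IOException {
    FileChecksum a = srcFs.getFileChecksum(src);
    FileChecksum b = dstFs.getFileChecksum(dst);
    // null means the filesystem does not expose checksums at all.
    return a != null && a.equals(b);
  }
}
{code}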

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, hadoop-10295.patch
>
>
> Currently while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> types of checksum, users have to either use "-skipcrccheck" or have checksum 
> mismatching exception. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884206#comment-13884206
 ] 

Laurent Goujon commented on HADOOP-10295:
-

For point 3, I was using {{getFileDefault()}} because it was the previous 
behavior, and in {{CopyMapper.map(...)}}, once the copy succeeds, a call is 
made to {{DistCpUtils.preserve(...)}}, which sets the owner, group, replication 
and permissions. Should it be refactored?

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, hadoop-10295.patch
>
>
> Currently while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> types of checksum, users have to either use "-skipcrccheck" or have checksum 
> mismatching exception. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884140#comment-13884140
 ] 

Hudson commented on HADOOP-10255:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-10255. Adding missed CHANGES.txt from change 1561959. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561961)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10255. Rename HttpServer to HttpServer2 to retain older HttpServer in 
branch-2 for compatibility. Contributed by Haohui Mai. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561959)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/AdminAuthorizedServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHtmlQuoting.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerLifecycle.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWebapps.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/

[jira] [Commented] (HADOOP-10288) Explicit reference to Log4JLogger breaks non-log4j users

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884144#comment-13884144
 ] 

Hudson commented on HADOOP-10288:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-10288. Explicit reference to Log4JLogger breaks non-log4j users. 
Contributed by Todd Lipcon. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561882)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java


> Explicit reference to Log4JLogger breaks non-log4j users
> 
>
> Key: HADOOP-10288
> URL: https://issues.apache.org/jira/browse/HADOOP-10288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 2.4.0
>
> Attachments: hadoop-10288.txt
>
>
> In HttpRequestLog, we make an explicit reference to the Log4JLogger class for 
> an instanceof check. If the log4j implementation isn't actually on the 
> classpath, the instanceof check throws NoClassDefFoundError instead of 
> returning false. This means that dependent projects that don't use log4j can 
> no longer embed HttpServer -- typically this is an issue when they use 
> MiniDFSCluster as part of their testing.
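
A hedged sketch of one way to make such a check classpath-safe (not necessarily 
the committed fix): resolve the class reflectively so a missing log4j binding 
yields false instead of NoClassDefFoundError:

{code}
import org.apache.commons.logging.Log;

public class Log4jProbe {
  // Avoids a compile-time reference to Log4JLogger, so this loads and runs
  // even when the log4j implementation is absent from the classpath.
  static boolean isLog4jLogger(Log log) {
    if (log == null) {
      return false;
    }
    try {
      Class<?> log4jClass =
          Class.forName("org.apache.commons.logging.impl.Log4JLogger");
      return log4jClass.isInstance(log);
    } catch (ClassNotFoundException e) {
      return false;
    }
  }
}
{code}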



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884141#comment-13884141
 ] 

Hudson commented on HADOOP-10212:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-10212. Incorrect compile command in Native Library document. 
(Contributed by Akira Ajisaka) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561838)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Incorrect compile command in Native Library document
> 
>
> Key: HADOOP-10212
> URL: https://issues.apache.org/jira/browse/HADOOP-10212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10212.patch
>
>
> The following old command still exists in the Native Library document.
> {code}
>$ ant -Dcompile.native=true 
> {code}
> Now maven is used instead of ant.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10203) Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884138#comment-13884138
 ] 

Hudson commented on HADOOP-10203:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-10203. Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata. 
Contributed by Andrei Savu. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561720)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java


> Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata 
> 
>
> Key: HADOOP-10203
> URL: https://issues.apache.org/jira/browse/HADOOP-10203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: CDH 2.0.0-cdh4.5.0 
> (30821ec616ee7a21ee8447949b7c6208a8f1e7d8) 
>Reporter: Andrei Savu
>Assignee: Andrei Savu
> Fix For: 2.4.0
>
> Attachments: HADOOP-10203-trunk.patch, HADOOP-10203.patch
>
>
> Jets3tNativeFileSystemStore#retrieveMetadata  is leaking connections. 
> This affects any client that tries to read many small files very quickly 
> (e.g. distcp from s3 to hdfs with small files, which blocks due to connection 
> pool starvation). 
> This is not a problem for larger files because when the GC runs any 
> connection that's out of scope will be released in #finalize().
> We are seeing the following log messages as a symptom of this problem:
> {noformat}
> 13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Attempting to 
> release HttpMethod in finalize() as its response data stream has gone out of 
> scope. This attempt will not always succeed and cannot be relied upon! Please 
> ensure response data streams are always fully consumed or closed to avoid 
> HTTP connection starvation.
> 13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Successfully 
> released HttpMethod in finalize(). You were lucky this time... Please ensure 
> response data streams are always fully consumed or closed.
> {noformat}
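
The general remedy, sketched under assumptions (this is not the attached patch, 
which touches the jets3t-specific code path): release the response stream 
deterministically instead of relying on finalize():

{code}
import java.io.IOException;
import java.io.InputStream;

public class EagerRelease {
  // When only metadata is needed, close the response stream in a finally
  // block so the pooled HTTP connection is returned immediately rather
  // than whenever the GC happens to run finalize().
  static void releaseEagerly(InputStream in) throws IOException {
    try {
      // ... inspect headers/metadata; the body itself is not needed ...
    } finally {
      in.close();
    }
  }
}
{code}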



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10250) VersionUtil returns wrong value when comparing two versions

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884142#comment-13884142
 ] 

Hudson commented on HADOOP-10250:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-10250. VersionUtil returns wrong value when comparing two versions. 
Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561860)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestVersionUtil.java


> VersionUtil returns wrong value when comparing two versions
> ---
>
> Key: HADOOP-10250
> URL: https://issues.apache.org/jira/browse/HADOOP-10250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.4.0
>
> Attachments: HADOOP-10250.001.patch, HADOOP-10250.002.patch, 
> HADOOP-10250.003.patch, HADOOP-10250.004.patch, HADOOP-10250.004.patch
>
>
> VersionUtil.compareVersions("1.0.0-beta-1", "1.0.0") returns 7 instead of a 
> negative number, which is wrong, because 1.0.0-beta-1 is older than 1.0.0.
>  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884137#comment-13884137
 ] 

Hudson commented on HADOOP-10274:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-10274 Lower the logging level from ERROR to WARN for UGI.doAs method 
(Takeshi Miao via stack) (stack: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561934)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error msg "Request is a replay (34) - PROCESS_TGS" while 
> using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related msg is as follows...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase would do the retry (5 * 10 times) and 
> recover from this _'request is a replay (34)'_ issue, but from the HBase 
> user's viewpoint, the error msg at the first line may be frightening, as we 
> were afraid some data loss was occurring at first sight...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apach

[jira] [Commented] (HADOOP-10086) User document for authentication in secure cluster

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884143#comment-13884143
 ] 

Hudson commented on HADOOP-10086:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
Fix correct CHANGES.txt for HADOOP-10086 and HADOOP-9982. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561819)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-10086. User document for authentication in secure cluster. (Contributed 
by Masatake Iwasaki) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561776)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SecureMode.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


> User document for authentication in secure cluster
> --
>
> Key: HADOOP-10086
> URL: https://issues.apache.org/jira/browse/HADOOP-10086
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Priority: Minor
>  Labels: documentaion, security
> Fix For: 2.3.0
>
> Attachments: HADOOP-10086-0.patch, HADOOP-10086-1.patch, 
> HADOOP-10086-2.patch, HADOOP-10086-3.patch
>
>
> There is no independent section for basic security features such as 
> authentication and group mapping in the user documentation, though there are 
> sections for "Service Level Authorization" and "HTTP Authentication".
> Creating an independent section for authentication and moving the content 
> about secure clusters currently residing in the "Cluster Setup" section 
> could be a good starting point.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884131#comment-13884131
 ] 

Hudson commented on HADOOP-9830:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
HADOOP-9830. Fix typo at http://hadoop.apache.org/docs/current/ (Contributed by 
Kousuke Saruta) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561951)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/apt/index.apt.vm


> Typo at http://hadoop.apache.org/docs/current/
> --
>
> Key: HADOOP-9830
> URL: https://issues.apache.org/jira/browse/HADOOP-9830
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.0-beta, 0.23.9, 2.0.6-alpha
>Reporter: Dmitry Lysnichenko
>Assignee: Kousuke Saruta
>Priority: Trivial
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9830.patch
>
>
> Strange symbols at http://hadoop.apache.org/docs/current/
> {code} 
> ApplicationMaster manages the application’s scheduling and coordination. 
> {code}
> Sorry for posting here, could not find any other way to report.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9982) Fix dead links in hadoop site docs

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884129#comment-13884129
 ] 

Hudson commented on HADOOP-9982:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1656 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1656/])
Fix correct CHANGES.txt for HADOOP-10086 and HADOOP-9982. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561819)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-9982. Fix dead links in hadoop site docs. (Contributed by Akira Ajisaka) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561813)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix dead links in hadoop site docs
> --
>
> Key: HADOOP-9982
> URL: https://issues.apache.org/jira/browse/HADOOP-9982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9982.patch
>
>
> For example, the hyperlink 'Single Node Setup' doesn't work correctly in 
> ['Cluster Setup' 
> document|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html].
> I also found other dead links. I'll try to fix them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10203) Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884121#comment-13884121
 ] 

Hudson commented on HADOOP-10203:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-10203. Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata. 
Contributed by Andrei Savu. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561720)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java


> Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata 
> 
>
> Key: HADOOP-10203
> URL: https://issues.apache.org/jira/browse/HADOOP-10203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: CDH 2.0.0-cdh4.5.0 
> (30821ec616ee7a21ee8447949b7c6208a8f1e7d8) 
>Reporter: Andrei Savu
>Assignee: Andrei Savu
> Fix For: 2.4.0
>
> Attachments: HADOOP-10203-trunk.patch, HADOOP-10203.patch
>
>
> Jets3tNativeFileSystemStore#retrieveMetadata  is leaking connections. 
> This affects any client that tries to read many small files very quickly 
> (e.g. distcp from s3 to hdfs with small files, which blocks due to connection 
> pool starvation). 
> This is not a problem for larger files because when the GC runs any 
> connection that's out of scope will be released in #finalize().
> We are seeing the following log messages as a symptom of this problem:
> {noformat}
> 13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Attempting to 
> release HttpMethod in finalize() as its response data stream has gone out of 
> scope. This attempt will not always succeed and cannot be relied upon! Please 
> ensure response data streams are always fully consumed or closed to avoid 
> HTTP connection starvation.
> 13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Successfully 
> released HttpMethod in finalize(). You were lucky this time... Please ensure 
> response data streams are always fully consumed or closed.
> {noformat}
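
The usual fix for this class of leak is to release the object's data stream 
deterministically instead of relying on finalize(). A hedged Java sketch, 
assuming JetS3t's StorageObject API (the method name and surrounding fields are 
illustrative, not the verbatim patch):
{code}
// Close the metadata object's data stream in a finally block so the pooled
// HTTP connection is returned immediately rather than at GC time.
private FileMetadata retrieveMetadataSketch(S3Service s3Service, S3Bucket bucket,
    String key) throws ServiceException, IOException {
  StorageObject object = null;
  try {
    object = s3Service.getObjectDetails(bucket.getName(), key);
    return new FileMetadata(key, object.getContentLength(),
        object.getLastModifiedDate().getTime());
  } finally {
    if (object != null) {
      object.closeDataInputStream(); // releases the underlying connection
    }
  }
}
{code}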



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10288) Explicit reference to Log4JLogger breaks non-log4j users

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884127#comment-13884127
 ] 

Hudson commented on HADOOP-10288:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-10288. Explicit reference to Log4JLogger breaks non-log4j users. 
Contributed by Todd Lipcon. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561882)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java


> Explicit reference to Log4JLogger breaks non-log4j users
> 
>
> Key: HADOOP-10288
> URL: https://issues.apache.org/jira/browse/HADOOP-10288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 2.4.0
>
> Attachments: hadoop-10288.txt
>
>
> In HttpRequestLog, we make an explicit reference to the Log4JLogger class for 
> an instanceof check. If the log4j implementation isn't actually on the 
> classpath, the instanceof check throws NoClassDefFoundError instead of 
> returning false. This means that dependent projects that don't use log4j can 
> no longer embed HttpServer -- typically this is an issue when they use 
> MiniDFSCluster as part of their testing.
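
One way to make such a check safe is to compare class names (or probe with 
Class.forName) so that the log4j type itself is never loaded when log4j is 
absent. A hedged sketch, not the verbatim patch:
{code}
// instanceof would force loading of Log4JLogger; comparing the class name
// does not, so this returns false cleanly when log4j is missing.
private static boolean isLog4jLog(org.apache.commons.logging.Log log) {
  return log != null && "org.apache.commons.logging.impl.Log4JLogger"
      .equals(log.getClass().getName());
}
{code}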



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884114#comment-13884114
 ] 

Hudson commented on HADOOP-9830:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-9830. Fix typo at http://hadoop.apache.org/docs/current/ (Contributed by 
Kousuke Saruta) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561951)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/apt/index.apt.vm


> Typo at http://hadoop.apache.org/docs/current/
> --
>
> Key: HADOOP-9830
> URL: https://issues.apache.org/jira/browse/HADOOP-9830
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.0-beta, 0.23.9, 2.0.6-alpha
>Reporter: Dmitry Lysnichenko
>Assignee: Kousuke Saruta
>Priority: Trivial
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9830.patch
>
>
> Strange symbols at http://hadoop.apache.org/docs/current/
> {code} 
> ApplicationMaster manages the application’s scheduling and coordination. 
> {code}
> Sorry for posting here, could not find any other way to report.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884123#comment-13884123
 ] 

Hudson commented on HADOOP-10255:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-10255. Adding missed CHANGES.txt from change 1561959. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561961)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10255. Rename HttpServer to HttpServer2 to retain older HttpServer in 
branch-2 for compatibility. Contributed by Haohui Mai. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561959)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/AdminAuthorizedServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHtmlQuoting.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerLifecycle.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWebapps.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main

[jira] [Commented] (HADOOP-10086) User document for authentication in secure cluster

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884126#comment-13884126
 ] 

Hudson commented on HADOOP-10086:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
Fix correct CHANGES.txt for HADOOP-10086 and HADOOP-9982. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561819)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-10086. User document for authentication in secure cluster. (Contributed 
by Masatake Iwasaki) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561776)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SecureMode.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


> User document for authentication in secure cluster
> --
>
> Key: HADOOP-10086
> URL: https://issues.apache.org/jira/browse/HADOOP-10086
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Priority: Minor
>  Labels: documentaion, security
> Fix For: 2.3.0
>
> Attachments: HADOOP-10086-0.patch, HADOOP-10086-1.patch, 
> HADOOP-10086-2.patch, HADOOP-10086-3.patch
>
>
> There is no independent section for basic security features such as 
> authentication and group mapping in the user documentation, though there are 
> sections for "Service Level Authorization" and "HTTP Authentication".
> Creating an independent section for authentication, and moving the content 
> about secure clusters that currently resides in the "Cluster Setup" section, 
> could be a good starting point.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884120#comment-13884120
 ] 

Hudson commented on HADOOP-10274:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-10274 Lower the logging level from ERROR to WARN for UGI.doAs method 
(Takeshi Miao via stack) (stack: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561934)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
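
The demotion itself is essentially a one-line change; a hedged sketch of what it 
amounts to in UserGroupInformation#doAs (hypothetical excerpt, not the verbatim 
patch):
{code}
// Before: retriable authentication failures surfaced at ERROR.
LOG.error("PriviledgedActionException as:" + this + " cause:" + cause);
// After: WARN, since callers such as HBase typically retry and recover.
LOG.warn("PriviledgedActionException as:" + this + " cause:" + cause);
{code}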


> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error message "Request is a replay (34) - PROCESS_TGS" 
> while using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages are as follows:
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase retries (5 * 10 times) and recovers from this 
> _'request is a replay (34)'_ issue, but from the HBase user's viewpoint the 
> error message on the first line may be frightening, as at first sight we were 
> afraid that some data loss had occurred...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR]

[jira] [Commented] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884124#comment-13884124
 ] 

Hudson commented on HADOOP-10212:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-10212. Incorrect compile command in Native Library document. 
(Contributed by Akira Ajisaka) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561838)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Incorrect compile command in Native Library document
> 
>
> Key: HADOOP-10212
> URL: https://issues.apache.org/jira/browse/HADOOP-10212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10212.patch
>
>
> The following old command still exists in the Native Library document.
> {code}
>    $ ant -Dcompile.native=true 
> {code}
> Maven is now used instead of Ant.
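
For reference, a hedged example of the Maven-based replacement (these are the 
flags commonly documented for native builds, not a quote from the patch):
{code}
   $ mvn package -Pdist,native -DskipTests -Dtar
{code}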



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10250) VersionUtil returns wrong value when comparing two versions

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884125#comment-13884125
 ] 

Hudson commented on HADOOP-10250:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
HADOOP-10250. VersionUtil returns wrong value when comparing two versions. 
Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561860)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestVersionUtil.java


> VersionUtil returns wrong value when comparing two versions
> ---
>
> Key: HADOOP-10250
> URL: https://issues.apache.org/jira/browse/HADOOP-10250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.4.0
>
> Attachments: HADOOP-10250.001.patch, HADOOP-10250.002.patch, 
> HADOOP-10250.003.patch, HADOOP-10250.004.patch, HADOOP-10250.004.patch
>
>
> VersionUtil.compareVersions("1.0.0-beta-1", "1.0.0") returns 7 instead of 
> a negative number, which is wrong because 1.0.0-beta-1 is older than 1.0.0.
>  
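
For clarity, the expected contract (a hedged sketch against 
org.apache.hadoop.util.VersionUtil; a pre-release must sort before its release):
{code}
// Expected ordering once fixed (run with -ea to enable asserts):
assert VersionUtil.compareVersions("1.0.0-beta-1", "1.0.0") < 0;  // older
assert VersionUtil.compareVersions("1.0.0", "1.0.0-beta-1") > 0;  // newer
assert VersionUtil.compareVersions("1.0.0", "1.0.0") == 0;        // equal
{code}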



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9982) Fix dead links in hadoop site docs

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884112#comment-13884112
 ] 

Hudson commented on HADOOP-9982:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1681 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1681/])
Fix correct CHANGES.txt for HADOOP-10086 and HADOOP-9982. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561819)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-9982. Fix dead links in hadoop site docs. (Contributed by Akira Ajisaka) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561813)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix dead links in hadoop site docs
> --
>
> Key: HADOOP-9982
> URL: https://issues.apache.org/jira/browse/HADOOP-9982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9982.patch
>
>
> For example, the hyperlink 'Single Node Setup' doesn't work correctly in 
> ['Cluster Setup' 
> document|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html].
> I also found other dead links. I'll try to fix them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884034#comment-13884034
 ] 

Uma Maheswara Rao G commented on HADOOP-10274:
--

I have added Takeshi Miao to the contributors list and assigned this JIRA to him.

> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error message "Request is a replay (34) - PROCESS_TGS" 
> while using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages are as follows:
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase retries (5 * 10 times) and recovers from this 
> _'request is a replay (34)'_ issue, but from the HBase user's viewpoint the 
> error message on the first line may be frightening, as at first sight we were 
> afraid that some data loss had occurred...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> {code}
> So I'd like to su

[jira] [Updated] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HADOOP-10274:
-

Assignee: takeshi.miao

> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Assignee: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error message "Request is a replay (34) - PROCESS_TGS" 
> while using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages are as follows:
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase retries (5 * 10 times) and recovers from this 
> _'request is a replay (34)'_ issue, but from the HBase user's viewpoint the 
> error message on the first line may be frightening, as at first sight we were 
> afraid that some data loss had occurred...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> {code}
> So I'd like to suggest to change the logging level from '_ERROR_' to '_WARN_' 
> for 
> _o.a.hadoop.security.UserGroupInforma

[jira] [Commented] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884017#comment-13884017
 ] 

Hudson commented on HADOOP-10255:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-10255. Adding missed CHANGES.txt from change 1561959. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561961)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10255. Rename HttpServer to HttpServer2 to retain older HttpServer in 
branch-2 for compatibility. Contributed by Haohui Mai. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561959)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/AdminAuthorizedServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/MetricsServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHtmlQuoting.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerLifecycle.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWebapps.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/ap

[jira] [Commented] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884014#comment-13884014
 ] 

Hudson commented on HADOOP-10274:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-10274 Lower the logging level from ERROR to WARN for UGI.doAs method 
(Takeshi Miao via stack) (stack: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561934)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error message "Request is a replay (34) - PROCESS_TGS" 
> while using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages are as follows:
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase retries (5 * 10 times) and recovers from this 
> _'request is a replay (34)'_ issue, but from the HBase user's viewpoint the 
> error message on the first line may be frightening, as at first sight we were 
> afraid that some data loss had occurred...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation

[jira] [Commented] (HADOOP-10086) User document for authentication in secure cluster

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884020#comment-13884020
 ] 

Hudson commented on HADOOP-10086:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
Fix correct CHANGES.txt for HADOOP-10086 and HADOOP-9982. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561819)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-10086. User document for authentication in secure cluster. (Contributed 
by Masatake Iwasaki) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561776)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SecureMode.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


> User document for authentication in secure cluster
> --
>
> Key: HADOOP-10086
> URL: https://issues.apache.org/jira/browse/HADOOP-10086
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Priority: Minor
>  Labels: documentaion, security
> Fix For: 2.3.0
>
> Attachments: HADOOP-10086-0.patch, HADOOP-10086-1.patch, 
> HADOOP-10086-2.patch, HADOOP-10086-3.patch
>
>
> There is no independent section for basic security features such as 
> authentication and group mapping in the user documentation, though there are 
> sections for "Service Level Authorization" and "HTTP Authentication".
> Creating an independent section for authentication, and moving the content 
> about secure clusters that currently resides in the "Cluster Setup" section, 
> could be a good starting point.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10250) VersionUtil returns wrong value when comparing two versions

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884019#comment-13884019
 ] 

Hudson commented on HADOOP-10250:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-10250. VersionUtil returns wrong value when comparing two versions. 
Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561860)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestVersionUtil.java


> VersionUtil returns wrong value when comparing two versions
> ---
>
> Key: HADOOP-10250
> URL: https://issues.apache.org/jira/browse/HADOOP-10250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.4.0
>
> Attachments: HADOOP-10250.001.patch, HADOOP-10250.002.patch, 
> HADOOP-10250.003.patch, HADOOP-10250.004.patch, HADOOP-10250.004.patch
>
>
> VersionUtil.compareVersions("1.0.0-beta-1", "1.0.0") returns 7 instead of 
> a negative number, which is wrong because 1.0.0-beta-1 is older than 1.0.0.
>  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10203) Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884015#comment-13884015
 ] 

Hudson commented on HADOOP-10203:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-10203. Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata. 
Contributed by Andrei Savu. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561720)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.java


> Connection leak in Jets3tNativeFileSystemStore#retrieveMetadata 
> 
>
> Key: HADOOP-10203
> URL: https://issues.apache.org/jira/browse/HADOOP-10203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: CDH 2.0.0-cdh4.5.0 
> (30821ec616ee7a21ee8447949b7c6208a8f1e7d8) 
>Reporter: Andrei Savu
>Assignee: Andrei Savu
> Fix For: 2.4.0
>
> Attachments: HADOOP-10203-trunk.patch, HADOOP-10203.patch
>
>
> Jets3tNativeFileSystemStore#retrieveMetadata is leaking connections. 
> This affects any client that tries to read many small files very quickly 
> (e.g. a distcp of many small files from S3 to HDFS stalls due to connection 
> pool starvation). 
> This is not a problem for larger files because, when the GC runs, any 
> connection that's out of scope will be released in #finalize().
> We are seeing the following log messages as a symptom of this problem:
> {noformat}
> 13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Attempting to 
> release HttpMethod in finalize() as its response data stream has gone out of 
> scope. This attempt will not always succeed and cannot be relied upon! Please 
> ensure response data streams are always fully consumed or closed to avoid 
> HTTP connection starvation.
> 13/12/26 13:40:01 WARN httpclient.HttpMethodReleaseInputStream: Successfully 
> released HttpMethod in finalize(). You were lucky this time... Please ensure 
> response data streams are always fully consumed or closed.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884018#comment-13884018
 ] 

Hudson commented on HADOOP-10212:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-10212. Incorrect compile command in Native Library document. 
(Contributed by Akira Ajisaka) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561838)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Incorrect compile command in Native Library document
> 
>
> Key: HADOOP-10212
> URL: https://issues.apache.org/jira/browse/HADOOP-10212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10212.patch
>
>
> The following old command still exists in the Native Library document.
> {code}
>    $ ant -Dcompile.native=true 
> {code}
> Maven is now used instead of Ant.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10288) Explicit reference to Log4JLogger breaks non-log4j users

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884021#comment-13884021
 ] 

Hudson commented on HADOOP-10288:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-10288. Explicit reference to Log4JLogger breaks non-log4j users. 
Contributed by Todd Lipcon. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561882)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java


> Explicit reference to Log4JLogger breaks non-log4j users
> 
>
> Key: HADOOP-10288
> URL: https://issues.apache.org/jira/browse/HADOOP-10288
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 2.4.0
>
> Attachments: hadoop-10288.txt
>
>
> In HttpRequestLog, we make an explicit reference to the Log4JLogger class for 
> an instanceof check. If the log4j implementation isn't actually on the 
> classpath, the instanceof check throws NoClassDefFoundError instead of 
> returning false. This means that dependent projects that don't use log4j can 
> no longer embed HttpServer -- typically this is an issue when they use 
> MiniDFSCluster as part of their testing.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9830) Typo at http://hadoop.apache.org/docs/current/

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884008#comment-13884008
 ] 

Hudson commented on HADOOP-9830:


FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
HADOOP-9830. Fix typo at http://hadoop.apache.org/docs/current/ (Contributed by 
Kousuke Saruta) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561951)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/apt/index.apt.vm


> Typo at http://hadoop.apache.org/docs/current/
> --
>
> Key: HADOOP-9830
> URL: https://issues.apache.org/jira/browse/HADOOP-9830
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.0-beta, 0.23.9, 2.0.6-alpha
>Reporter: Dmitry Lysnichenko
>Assignee: Kousuke Saruta
>Priority: Trivial
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9830.patch
>
>
> Strange symbols at http://hadoop.apache.org/docs/current/
> {code} 
> ApplicationMaster manages the application’s scheduling and coordination. 
> {code}
> Sorry for posting here, could not find any other way to report.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9982) Fix dead links in hadoop site docs

2014-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884006#comment-13884006
 ] 

Hudson commented on HADOOP-9982:


FAILURE: Integrated in Hadoop-Yarn-trunk #464 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/464/])
Fix correct CHANGES.txt for HADOOP-10086 and HADOOP-9982. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561819)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-9982. Fix dead links in hadoop site docs. (Contributed by Akira Ajisaka) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1561813)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix dead links in hadoop site docs
> --
>
> Key: HADOOP-9982
> URL: https://issues.apache.org/jira/browse/HADOOP-9982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9982.patch
>
>
> For example, the hyperlink 'Single Node Setup' doesn't work correctly in 
> ['Cluster Setup' 
> document|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html].
> I also found other dead links. I'll try to fix them.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9410) S3 filesystem hangs on FileSystem.listFiles()

2014-01-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13883942#comment-13883942
 ] 

Steve Loughran commented on HADOOP-9410:


Good question: we should rerun the tests now and see if it is still there.



> S3 filesystem hangs on FileSystem.listFiles()
> -
>
> Key: HADOOP-9410
> URL: https://issues.apache.org/jira/browse/HADOOP-9410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0
> Environment: Talking to S3 West Coast and S3 EU
>Reporter: Steve Loughran
> Attachments: HADOOP-9410.patch
>
>
> A test in HADOOP-9258 of the Hadoop 2+ API call {{FileSystem.listFiles()}} 
> hangs repeatedly when using the {{s3://}} filesystem
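
For context, the call under test is the recursive listing API; a minimal 
reproduction sketch (bucket and path hypothetical):
{code}
// Consume a recursive remote listing; the hang is observed while iterating
// against an s3:// filesystem.
RemoteIterator<LocatedFileStatus> it =
    fs.listFiles(new Path("s3://mybucket/dir"), true);
while (it.hasNext()) {
  System.out.println(it.next().getPath());
}
{code}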



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-01-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13883889#comment-13883889
 ] 

Jing Zhao commented on HADOOP-10295:


Besides the concern about FileChecksum, some other comments on the current patch:
# We may want to change "checksum" to "checksumtype" in the changes of 
PRESERVE_STATUS and FileAttribute.
# We actually do not need to pass a FileChecksum to RetriableFileCopyCommand. 
In RetriableFileCopyCommand#doCopy, if we need to preserve the checksum type, 
we can get the checksum type of the source file and reuse this checksum in 
compareCheckSums(). In that case we only need to call sourceFS.getFileChecksum 
once (note that getFileChecksum is very costly).
# We should use 
"FsPermission.getFileDefault().applyUMask(FsPermission.getUMask(getConf()))" in 
the following change (see FileSystem#create(Path, boolean, int, short, long, 
Progressable)); a sketch follows this list:
{code}
-tmpTargetPath, true, BUFFER_SIZE,
+tmpTargetPath, FsPermission.getFileDefault(), 
+EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE), BUFFER_SIZE,
{code}
# The newly added unit test does not cover the scenario where source files have 
different actual checksum types (CRC32 and CRC32C), in which case a copy that 
preserves the checksum type should succeed and the original checksum types 
should be preserved in the target FS. We should add unit tests for this.
# There are some unnecessary whitespace and blank line changes.
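
A hedged Java sketch of comment #3, applying the configured umask to the 
default file permission before the create call (the surrounding variables are 
assumptions from the patch context):
{code}
// Derive the effective default permission the way FileSystem#create would,
// then create the temp target with explicit CREATE|OVERWRITE semantics.
FsPermission permission = FsPermission.getFileDefault()
    .applyUMask(FsPermission.getUMask(getConf()));
OutputStream out = targetFS.create(tmpTargetPath, permission,
    EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
    BUFFER_SIZE, replication, blockSize, progress, null);
{code}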

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HADOOP-10295.000.patch, hadoop-10295.patch
>
>
> Currently, while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> use the same checksum type. If files in the source cluster have mixed 
> checksum types, users have to either use "-skipcrccheck" or hit a checksum 
> mismatch exception. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 
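
For context, a hedged example of the current workarounds the description refers 
to (cluster paths hypothetical):
{code}
# Force a single checksum type on the target side:
$ hadoop distcp -Ddfs.checksum.type=CRC32C hdfs://src/data hdfs://dst/data
# Or skip CRC comparison entirely when source checksum types are mixed:
$ hadoop distcp -update -skipcrccheck hdfs://src/data hdfs://dst/data
{code}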



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)