[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331801#comment-16331801
 ] 

genericqa commented on HADOOP-15121:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 24 unchanged - 1 fixed = 24 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906763/HADOOP-15121.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 990e4031516b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bc93ac2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13998/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13998/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1712 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13998/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Comment Edited] (HADOOP-12502) SetReplication OutOfMemoryError

2018-01-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309507#comment-16309507
 ] 

Vinayakumar B edited comment on HADOOP-12502 at 1/19/18 6:06 AM:
-

Fixed the testcase by keeping the old way (non-iterator and sorted) for the 
*-getmerge* command.

Without this change the test fails due to the change in the order of elements 
returned by listStatusIterator().
LocalFileSystem returns sorted items on Windows, but they might not be sorted 
on Linux, so the test fails.


was (Author: vinayrpet):
Fixed the testcase by keeping old way (non-iterator and sorted) for *-getmerge* 
command.

Without this change test fails due to change in the order of elements in 
listStatusIterator().
LocalFileSystem returns sorted items in Windows, but it will be same in linux. 
So test failing.

> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Philipp Schuegerl
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, 
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, 
> HADOOP-12502-06.patch, HADOOP-12502-07.patch, HADOOP-12502-08.patch, 
> HADOOP-12502-09.patch
>
>
> Setting the replication of an HDFS folder recursively can run out of memory. 
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>   at java.util.Arrays.copyOfRange(Arrays.java:2694)
>   at java.lang.String.(String.java:203)
>   at java.lang.String.substring(String.java:1913)
>   at java.net.URI$Parser.substring(URI.java:2850)
>   at java.net.URI$Parser.parse(URI.java:3046)
>   at java.net.URI.(URI.java:753)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   at org.apache.hadoop.fs.Path.(Path.java:116)
>   at org.apache.hadoop.fs.Path.(Path.java:94)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
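
For context on the fix being reviewed here: the patches replace the 
array-returning listStatus() with the paged listStatusIterator() API. A minimal 
sketch of the idea (illustrative only, not the actual patch; the walker class 
and method names are made up):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class SetRepWalker {
  /**
   * Recursively set replication. listStatus() materializes an entire
   * directory listing as one FileStatus[] (the allocations visible in the
   * stack trace above), while listStatusIterator() pages through entries,
   * keeping client-side memory bounded even for huge directories.
   */
  static void setRepRecursive(FileSystem fs, Path dir, short replication)
      throws IOException {
    RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
    while (it.hasNext()) {
      FileStatus status = it.next();
      if (status.isDirectory()) {
        setRepRecursive(fs, status.getPath(), replication);
      } else {
        fs.setReplication(status.getPath(), replication);
      }
    }
  }
}
{code}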



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError

2018-01-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331788#comment-16331788
 ] 

Vinayakumar B commented on HADOOP-12502:


Hi [~fabbri], is the latest patch update fine with you?

 

> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Philipp Schuegerl
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, 
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, 
> HADOOP-12502-06.patch, HADOOP-12502-07.patch, HADOOP-12502-08.patch, 
> HADOOP-12502-09.patch
>
>
> Setting the replication of an HDFS folder recursively can run out of memory. 
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>   at java.util.Arrays.copyOfRange(Arrays.java:2694)
>   at java.lang.String.(String.java:203)
>   at java.lang.String.substring(String.java:1913)
>   at java.net.URI$Parser.substring(URI.java:2850)
>   at java.net.URI$Parser.parse(URI.java:3046)
>   at java.net.URI.(URI.java:753)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   at org.apache.hadoop.fs.Path.(Path.java:116)
>   at org.apache.hadoop.fs.Path.(Path.java:94)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15143) NPE due to Invalid KerberosTicket in UGI

2018-01-18 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331753#comment-16331753
 ] 

Wei-Chiu Chuang commented on HADOOP-15143:
--

Just cherry-picked into 2.8, 2.7 and 2.6. There was a very minor conflict in 
whitespace in the warning log message.

> NPE due to Invalid KerberosTicket in UGI
> 
>
> Key: HADOOP-15143
> URL: https://issues.apache.org/jira/browse/HADOOP-15143
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 2.6.6, 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HADOOP-15143-branch-2.001.patch, HADOOP-15143.001.patch, 
> HADOOP-15143.002.patch
>
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.security.UserGroupInformation.fixKerberosTicketOrder(UserGroupInformation.java:1170)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1247)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1157)
>  
> {code}
> It could be related to jdk issue
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/fd0e0898721c



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15143) NPE due to Invalid KerberosTicket in UGI

2018-01-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15143:
-
Fix Version/s: 2.6.6

> NPE due to Invalid KerberosTicket in UGI
> 
>
> Key: HADOOP-15143
> URL: https://issues.apache.org/jira/browse/HADOOP-15143
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 2.6.6, 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HADOOP-15143-branch-2.001.patch, HADOOP-15143.001.patch, 
> HADOOP-15143.002.patch
>
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.security.UserGroupInformation.fixKerberosTicketOrder(UserGroupInformation.java:1170)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1247)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1157)
>  
> {code}
> It could be related to jdk issue
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/fd0e0898721c



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15143) NPE due to Invalid KerberosTicket in UGI

2018-01-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15143:
-
Fix Version/s: 2.7.6
   2.8.4

> NPE due to Invalid KerberosTicket in UGI
> 
>
> Key: HADOOP-15143
> URL: https://issues.apache.org/jira/browse/HADOOP-15143
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: HADOOP-15143-branch-2.001.patch, HADOOP-15143.001.patch, 
> HADOOP-15143.002.patch
>
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.security.UserGroupInformation.fixKerberosTicketOrder(UserGroupInformation.java:1170)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1247)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1157)
>  
> {code}
> It could be related to jdk issue
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/fd0e0898721c



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15181) Typo in SecureMode.md

2018-01-18 Thread Masahiro Tanaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331744#comment-16331744
 ] 

Masahiro Tanaka commented on HADOOP-15181:
--

Could anyone review this?

> Typo in SecureMode.md
> -
>
> Key: HADOOP-15181
> URL: https://issues.apache.org/jira/browse/HADOOP-15181
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Trivial
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15181.00.patch
>
>
> https://github.com/apache/hadoop/blame/08332e12d055d85472f0c9371fefe9b56bfea1ed/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md#L575
> "<" should be unescaped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.007.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch, HADOOP-15121.007.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331733#comment-16331733
 ] 

Tao Jie commented on HADOOP-15121:
--

[~hanishakoneru], I tried to remove the redundant {{setDelegate}}, but that 
failed the test {{TestDecayRpcScheduler#testPriority}}.
In this testcase, the {{MetricsProxy}} instance was initialized in another 
test; when initializing {{DecayRpcScheduler}}, {{MetricsProxy}} was not 
actually re-initialized, and its delegate was empty.
So I think we'd better keep the explicit {{metricsProxy.setDelegate(this)}} 
here, in case the weakReference delegate has been reclaimed.
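
For readers following the discussion, a simplified sketch of the pattern at 
issue (condensed and paraphrased, not the actual DecayRpcScheduler code; the 
Object delegate stands in for the scheduler type):

{code:java}
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

class MetricsProxy {
  // One proxy per metrics namespace, cached for the life of the JVM.
  private static final Map<String, MetricsProxy> INSTANCES = new HashMap<>();
  // Weak reference, so the proxy never keeps a dead scheduler alive;
  // it may therefore be cleared by the GC at any time.
  private WeakReference<Object> delegate;

  static synchronized MetricsProxy getInstance(String namespace,
      Object scheduler) {
    MetricsProxy proxy =
        INSTANCES.computeIfAbsent(namespace, ns -> new MetricsProxy());
    // Without this explicit call, a proxy cached by an earlier test (or an
    // earlier scheduler on the same port) keeps a stale, possibly reclaimed
    // delegate, and getMetrics() dereferences null -- the NPE in this issue.
    proxy.setDelegate(scheduler);
    return proxy;
  }

  void setDelegate(Object scheduler) {
    this.delegate = new WeakReference<>(scheduler);
  }
}
{code}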

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.

[jira] [Commented] (HADOOP-15143) NPE due to Invalid KerberosTicket in UGI

2018-01-18 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331721#comment-16331721
 ] 

Wei-Chiu Chuang commented on HADOOP-15143:
--

Hi [~jnp], your commit made it into branch-2, so it will be in 2.10.0 (if that 
is released).

I just cherry-picked the commit to branch-2.9 for the 2.9.1 release.

Also cherry-picked the trunk commit to branch-3.0 for the 3.0.1 release.

> NPE due to Invalid KerberosTicket in UGI
> 
>
> Key: HADOOP-15143
> URL: https://issues.apache.org/jira/browse/HADOOP-15143
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15143-branch-2.001.patch, HADOOP-15143.001.patch, 
> HADOOP-15143.002.patch
>
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.security.UserGroupInformation.fixKerberosTicketOrder(UserGroupInformation.java:1170)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1247)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1157)
>  
> {code}
> It could be related to jdk issue
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/fd0e0898721c



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15143) NPE due to Invalid KerberosTicket in UGI

2018-01-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15143:
-
Fix Version/s: 3.0.1
   2.10.0

> NPE due to Invalid KerberosTicket in UGI
> 
>
> Key: HADOOP-15143
> URL: https://issues.apache.org/jira/browse/HADOOP-15143
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15143-branch-2.001.patch, HADOOP-15143.001.patch, 
> HADOOP-15143.002.patch
>
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.security.UserGroupInformation.fixKerberosTicketOrder(UserGroupInformation.java:1170)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1247)
>  
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1157)
>  
> {code}
> It could be related to jdk issue
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/fd0e0898721c



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331684#comment-16331684
 ] 

genericqa commented on HADOOP-15121:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 24 unchanged - 1 fixed = 24 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestDecayRpcScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906738/HADOOP-15121.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 34667015f3a5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37f4696 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13997/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13997/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13997/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1718 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |

[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331590#comment-16331590
 ] 

Tao Jie commented on HADOOP-15121:
--

[~hanishakoneru] Thank you for your comments.

I improved the patch according to your suggestions.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.006.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15182) Support to change back to signature version 2 of AWS SDK

2018-01-18 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331554#comment-16331554
 ] 

Yonger commented on HADOOP-15182:
-

You mean "fs.s3a.signing-algorithm"? But I can't find an algorithm that 
matches v2:
{code:java}
static {
 // Register the standard signer types.
 SIGNERS.put(QUERY_STRING_SIGNER, QueryStringSigner.class);
 SIGNERS.put(VERSION_THREE_SIGNER, AWS3Signer.class);
 SIGNERS.put(VERSION_FOUR_SIGNER, AWS4Signer.class);
 SIGNERS.put(VERSION_FOUR_UNSIGNED_PAYLOAD_SIGNER, 
AWS4UnsignedPayloadSigner.class);
 SIGNERS.put(NO_OP_SIGNER, NoOpSigner.class);
}{code}
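
(For what it's worth, the v2 S3 signer does not appear in that map: the SDK 
special-cases the name {{S3SignerType}} in the S3 client, and that name is 
what the S3A documentation suggests for v2-only stores. A hedged sketch of the 
override, where the endpoint is hypothetical and whether the store accepts it 
still needs to be verified:)

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3AV2Signing {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Ask the S3 client for the legacy v2 signer instead of the v4 default.
    conf.set("fs.s3a.signing-algorithm", "S3SignerType");
    // Hypothetical endpoint for a v2-only store such as an older Ceph RGW.
    conf.set("fs.s3a.endpoint", "http://ceph.example.com:7480");
    try (FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf)) {
      System.out.println("connected to " + fs.getUri());
    }
  }
}
{code}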

> Support to change back to signature version 2 of AWS SDK
> 
>
> Key: HADOOP-15182
> URL: https://issues.apache.org/jira/browse/HADOOP-15182
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.0
> Environment:  
>  
>  
>Reporter: Yonger
>Priority: Minor
>
> The current s3a depends on aws-java-sdk-bundle-1.11.199, which uses signature 
> v4. So Hadoop can't work with some s3-compatible systems (e.g. Ceph) that 
> still use v2.
> s3cmd can use v2 by specifying an option like:
> {code:java}
> s3cmd --signature-v2 ls s3://xxx/{code}
>  
> Maybe we can add a parameter to allow falling back to signature v2 in s3a.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit

2018-01-18 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13508:
-
Target Version/s: 3.0.0-alpha1, 2.9.0  (was: 2.9.0, 3.0.0-alpha1)
   Fix Version/s: 2.7.6

I just committed this to branch-2.7. Thanks.

> FsPermission string constructor does not recognize sticky bit
> -
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha2, 2.8.2, 2.7.6
>
> Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch, 
> HADOOP-13508.003.patch, HADOOP-13508.004.patch, HADOOP-13508.005.patch, 
> HADOOP-13508.006.patch, HADOOP-13508.branch-2.patch
>
>
> FsPermission's string constructor breaks on valid permission strings, like 
> "1777". 
> This is because the FsPermission class naïvely uses UmaskParser to do its 
> parsing of permissions (from the source code):
> {code:java}
> public FsPermission(String mode) {
>   this((new UmaskParser(mode)).getUMask());
> }
> {code}
> The mode string UMask accepts is subtly different (esp. wrt the sticky bit), 
> so parsing a umask is not the same as parsing an FsPermission. 
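
A minimal sketch of the behaviour the report expects (illustrative only; the 
assertions state the intended semantics, which the unpatched constructor does 
not deliver):

{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class StickyBitCheck {
  public static void main(String[] args) {
    // "1777" = rwxrwxrwx plus the sticky bit: a valid chmod-style mode
    // string that the umask-based parsing rejects.
    FsPermission p = new FsPermission("1777");
    assert p.getUserAction() == FsAction.ALL;
    assert p.getStickyBit();          // the bit the UmaskParser mishandles
    System.out.println(p);            // expected: rwxrwxrwt
  }
}
{code}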



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331548#comment-16331548
 ] 

Ajay Kumar commented on HADOOP-15178:
-

The asflicense warning and unit test failures are unrelated.

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15178.001.patch
>
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or 
> BindException.
> By default, it should always return an instance (a subclass of IOException) 
> of the same type, unless a String constructor is not available.
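
A sketch of the proposed generalization (not the committed patch, and 
simplified to leave out wrapException's host/port message building): rebuild 
the same exception type via its String constructor when one exists, otherwise 
fall back to the current generic behaviour.

{code:java}
import java.io.IOException;
import java.lang.reflect.Constructor;

public class WrapSketch {
  static IOException wrap(String detail, IOException cause) {
    try {
      // Most IOException subclasses expose a (String) constructor; use it so
      // callers can still catch the specific type after wrapping.
      Constructor<? extends IOException> ctor =
          cause.getClass().getConstructor(String.class);
      IOException wrapped = ctor.newInstance(detail);
      wrapped.initCause(cause);
      return wrapped;
    } catch (ReflectiveOperationException e) {
      // No accessible String constructor: fall back to a plain IOException.
      return new IOException(detail, cause);
    }
  }
}
{code}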



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky

2018-01-18 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13375:
-
Fix Version/s: 2.7.6

Just committed this to branch-2.7. Thanks.

> o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
> --
>
> Key: HADOOP-13375
> URL: https://issues.apache.org/jira/browse/HADOOP-13375
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha2, 2.7.6
>
> Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, 
> HADOOP-13375.003.patch, HADOOP-13375.004.patch, HADOOP-13375.005.patch, 
> HADOOP-13375.006.patch, HADOOP-13375.007.patch
>
>
> h5. Error Message
> bq. expected:<1> but was:<0>
> h5. Stacktrace
> {quote}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry

2018-01-18 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13263:
-
Fix Version/s: 2.7.6

Just committed this to branch-2.7. Thanks.

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1, 2.7.6
>
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch, HADOOP-13263.004.patch, HADOOP-13263.005.patch, 
> HADOOP-13263.006.patch, HADOOP-13263.007.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.
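
A hedged sketch of the mechanism described above, using Guava's public API 
directly rather than the Groups implementation itself (names and timeouts are 
illustrative):

{code:java}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.collect.ImmutableList;

public class BackgroundGroupCache {
  public static void main(String[] args) {
    // Corresponds to hadoop.security.groups.cache.background.reload.threads.
    ExecutorService pool = Executors.newFixedThreadPool(1);
    CacheLoader<String, List<String>> loader =
        new CacheLoader<String, List<String>>() {
          @Override public List<String> load(String user) {
            return ImmutableList.of("users"); // stands in for the slow lookup
          }
        };
    // refreshAfterWrite: the first caller after expiry schedules a reload on
    // the pool and still returns the old value, so no caller ever blocks on
    // a refresh of an already-populated key.
    LoadingCache<String, List<String>> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(300, TimeUnit.SECONDS)
        .build(CacheLoader.asyncReloading(loader, pool));
    System.out.println(cache.getUnchecked("hdfs"));
    pool.shutdown();
  }
}
{code}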



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-18 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-12751:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.6
   Status: Resolved  (was: Patch Available)

Just committed this to branch-2.7. Thanks.

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.7.6, 3.0.0-alpha1, 2.8.0
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other user names will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a validator for 
> whether the 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-18 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331429#comment-16331429
 ] 

Konstantin Shvachko edited comment on HADOOP-12751 at 1/18/18 11:42 PM:


Thanks for the review [~brahmareddy].
Yes, Yetus is trying to download Oracle Java 8, which probably requires a login.
It should be OpenJDK7 for branch-2.7. Something that should have been fixed by 
HADOOP-14474?


was (Author: shv):
Thanks for the review [~brahmareddy].
Yes Yetus is trying to download Oracle Java 8, which probably requires a login.
It should be OpenJDK2 for branch-2.7. Something that should have been fixed by 
HADOOP-14474?

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users' names will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator of whether the 
> 'auth_to_local' rules were applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-01-18 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331429#comment-16331429
 ] 

Konstantin Shvachko commented on HADOOP-12751:
--

Thanks for the review [~brahmareddy].
Yes, Yetus is trying to download Oracle Java 8, which probably requires a login.
It should be OpenJDK7 for branch-2.7. Something that should have been fixed by 
HADOOP-14474?

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users' names will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator of whether the 
> 'auth_to_local' rules were applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15182) Support to change back to signature version 2 of AWS SDK

2018-01-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331424#comment-16331424
 ] 

Steve Loughran commented on HADOOP-15182:
-

There is a switch for that already, isn’t there?

> Support to change back to signature version 2 of AWS SDK
> 
>
> Key: HADOOP-15182
> URL: https://issues.apache.org/jira/browse/HADOOP-15182
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.0
> Environment:  
>  
>  
>Reporter: Yonger
>Priority: Minor
>
> Currently s3a depends on aws-java-sdk-bundle-1.11.199, which uses signature 
> v4. So for s3-compatible systems (e.g. Ceph) which are still using v2, Hadoop 
> can't work with them.
> s3cmd can use v2 by specifying an option like:
> {code}
> s3cmd --signature-v2 ls s3://xxx/{code}
>  
> maybe we can add a parameter to allow falling back to signature v2 in s3a.
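
For reference, a hedged sketch of the existing switch the comment above alludes 
to, assuming it is the {{fs.s3a.signing-algorithm}} option ({{S3SignerType}} 
being the AWS SDK's name for the older v2 signer):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class V2SignerDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Select the SDK's v2 signer for an endpoint (e.g. Ceph) that only
    // understands signature v2, instead of adding a new parameter.
    conf.set("fs.s3a.signing-algorithm", "S3SignerType");
    System.out.println(conf.get("fs.s3a.signing-algorithm"));
  }
}
{code}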



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331381#comment-16331381
 ] 

Arpit Agarwal commented on HADOOP-15121:


Done.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HADOOP-15121:
--

Assignee: Tao Jie

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331377#comment-16331377
 ] 

Hanisha Koneru commented on HADOOP-15121:
-

[~arpitagarwal], can you please add [~Tao Jie] to the contributors list? Thanks.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331362#comment-16331362
 ] 

Hanisha Koneru commented on HADOOP-15121:
-

Thanks for the patch, [~Tao Jie].

I have a couple of minor comments. The patch LGTM otherwise.
 * The {{setDelegate()}} call here is redundant, as you have already set the 
delegate during MetricsProxy initialization (see the sketch after this list).

{code:java}
metricsProxy = MetricsProxy.getInstance(ns, numLevels, this);
metricsProxy.setDelegate(this);{code}
 * If the 2s test case is occasionally timing out on a local machine, then a 5s 
timeout might also fail on an under-powered VM. It is better to set a higher 
test-case timeout than we would ever expect the test to take (say 60s).
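
A minimal, self-contained model of the initialization pattern being discussed 
(names like {{INSTANCES}} and {{Delegate}} are hypothetical stand-ins, not the 
actual patch): the delegate is installed when the proxy is created inside 
{{getInstance()}}, which is why the extra {{setDelegate()}} call at the call 
site is redundant.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class MetricsProxyDemo {
  interface Delegate { }  // stands in for the DecayRpcScheduler instance

  static class MetricsProxy {
    private static final Map<String, MetricsProxy> INSTANCES = new HashMap<>();
    private volatile Delegate delegate;

    private MetricsProxy(Delegate d) {
      this.delegate = d;  // delegate set eagerly, so JMX never sees null
    }

    static synchronized MetricsProxy getInstance(String ns, Delegate d) {
      MetricsProxy proxy = INSTANCES.get(ns);
      if (proxy == null) {
        proxy = new MetricsProxy(d);
        INSTANCES.put(ns, proxy);
      } else {
        proxy.setDelegate(d);  // refresh when a scheduler is re-created
      }
      return proxy;
    }

    void setDelegate(Delegate d) {
      this.delegate = d;
    }
  }

  public static void main(String[] args) {
    Delegate scheduler = new Delegate() { };
    MetricsProxy proxy = MetricsProxy.getInstance("ipc.8020", scheduler);
    // No extra proxy.setDelegate(scheduler) needed here.
    System.out.println(proxy != null);
  }
}
{code}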

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.

[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331227#comment-16331227
 ] 

genericqa commented on HADOOP-15114:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
34s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15114 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906694/HADOOP-15114.addendum1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7068b3feebef 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 06cceba |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13996/testReport/ |
| Max. process+thread count | 1445 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13996/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114

[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15114:
---
Status: Patch Available  (was: Reopened)

Thanks for reporting this [~brahmareddy].

+1 for the addendum, pending Jenkins.

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
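
A plausible shape for the utility (illustrative only; the attached patches and 
addenda define the real signature and logging behaviour): close every stream, 
swallowing per-stream IOExceptions so that one failure does not prevent the 
remaining streams from being closed.

{code:java}
import java.io.Closeable;
import java.io.IOException;

public final class CloseStreamsSketch {
  private CloseStreamsSketch() { }

  // Best-effort close of a varargs list of streams; null entries are skipped.
  public static void closeStreams(Closeable... streams) {
    if (streams == null) {
      return;
    }
    for (Closeable stream : streams) {
      if (stream == null) {
        continue;
      }
      try {
        stream.close();
      } catch (IOException ignored) {
        // keep going so the later streams still get closed
      }
    }
  }
}
{code}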



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: (was: HADOOP-15114.addendum1.patch)

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331069#comment-16331069
 ] 

genericqa commented on HADOOP-14788:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14788 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906678/HADOOP-14788.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af2bdc1e7490 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 06cceba |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13995/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13995/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1356 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13995/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.addendum1.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.addendum1.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: (was: HADOOP-15114.addendum1.patch)

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.addendum1.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14757) S3AFileSystem.innerRename() to size metadatastore lists better

2018-01-18 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine reassigned HADOOP-14757:
-

Assignee: Abraham Fine

> S3AFileSystem.innerRename() to size metadatastore lists better
> --
>
> Key: HADOOP-14757
> URL: https://issues.apache.org/jira/browse/HADOOP-14757
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Abraham Fine
>Priority: Minor
> Fix For: 3.1.0
>
>
> In {{S3AFileSystem.innerRename()}}, various ArrayLists are created to track 
> paths to update; these are created with the default size. It could/should be 
> possible to allocate better, and so avoid expensive array growth & copy 
> operations while iterating through the list of entries.
> # for a single file copy, sizes == 1
> # for a recursive copy, the outcome of the first real LIST will either 
> provide the actual size, or, if the list equals the max response, a very 
> large minimum size.
> For #2, we'd need a hint of the iterable's length rather than just iterating 
> through it... some interface {{IterableLength.expectedMinimumSize()}} could 
> do that (see the sketch below).
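
A hedged sketch of the presizing idea (the helper and the 
{{expectedMinimumSize()}} hint are hypothetical, per the description above):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class PresizeListsDemo {
  // Size the bookkeeping list from a known or estimated entry count instead
  // of the default capacity, avoiding repeated array growth/copy during a
  // large recursive rename.
  static List<String> newPathList(int expectedMinimumSize) {
    return new ArrayList<>(Math.max(1, expectedMinimumSize));
  }

  public static void main(String[] args) {
    // Single-file copy: exactly one entry. Recursive copy: the first LIST
    // page's size, or a large floor when the page was full (truncated).
    List<String> single = newPathList(1);
    List<String> bulk = newPathList(5000);
    single.add("s3a://bucket/src/file");
    System.out.println(single.size() + " entry; bulk list presized for 5000");
  }
}
{code}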



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14577) ITestS3AInconsistency.testGetFileStatus failing in -DS3guard test runs

2018-01-18 Thread Abraham Fine (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331009#comment-16331009
 ] 

Abraham Fine commented on HADOOP-14577:
---

[~mackrorysd] [~ste...@apache.org] I have been unable to replicate this on the 
latest trunk. Is this still an issue?

> ITestS3AInconsistency.testGetFileStatus failing in -DS3guard test runs
> --
>
> Key: HADOOP-14577
> URL: https://issues.apache.org/jira/browse/HADOOP-14577
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
>Reporter: Sean Mackrory
>Assignee: Abraham Fine
>Priority: Minor
>
> This test is failing for me when run individually or in parallel (with 
> -Ds3guard), even if I revert to the commit that introduced it. I thought I 
> had successful test runs on that before, and I haven't changed anything in my 
> test configuration.
> {code}Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.671 
> sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AInconsistency
> testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  Time 
> elapsed: 4.475 sec  <<< FAILURE!
> java.lang.AssertionError: S3Guard failed to list parent of inconsistent child.
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:83){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org