[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2020-01-01 Thread Tao Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15887:
-
Attachment: HADOOP-15887.005.patch

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch, 
> HADOOP-15887.003.patch, HADOOP-15887.004.patch, HADOOP-15887.005.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2019-12-31 Thread Tao Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15887:
-
Attachment: HADOOP-15887.004.patch

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch, 
> HADOOP-15887.003.patch, HADOOP-15887.004.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2019-12-31 Thread Tao Jie (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17006046#comment-17006046
 ] 

Tao Jie commented on HADOOP-15887:
--

[~ayushtkn] Thank you for your comment. I've rebased the patch onto trunk. 
As for the exception logic in the test code, I think it is acceptable, and I 
prefer to keep the behavior consistent with the other test cases.

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch, 
> HADOOP-15887.003.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2019-12-31 Thread Tao Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15887:
-
Attachment: HADOOP-15887.003.patch

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch, 
> HADOOP-15887.003.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15918) Namenode gets stuck when deleting large dir in trash

2018-11-14 Thread Tao Jie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15918:
-
Attachment: HADOOP-15918.002.patch

> Namenode gets stuck when deleting large dir in trash
> 
>
> Key: HADOOP-15918
> URL: https://issues.apache.org/jira/browse/HADOOP-15918
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2, 3.1.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15918.001.patch, HADOOP-15918.002.patch, 
> HDFS-13769.001.patch, HDFS-13769.002.patch, HDFS-13769.003.patch, 
> HDFS-13769.004.patch
>
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for 
> a long time when deleting a trash dir with a large amount of data. We found 
> this log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem 
> (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for 
> 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in one delete 
> RPC call. We implemented a TrashPolicy that divides the delete operation into 
> several delete RPCs, so that no single deletion removes too many files.
> Any thoughts? [~linyiqun]
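A minimal sketch of the chunked-delete idea (names here are illustrative, and a 
real TrashPolicy would keep recursing until each RPC stays below some size 
threshold):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ChunkedTrashDelete {
  // Delete a large trash directory in several small RPCs instead of one big
  // recursive delete, so the namenode write lock is released between calls.
  static void deleteInChunks(FileSystem fs, Path trashRoot) throws IOException {
    for (FileStatus child : fs.listStatus(trashRoot)) {
      // each delete RPC removes only one subtree
      fs.delete(child.getPath(), true);
    }
    // finally remove the (now almost empty) root itself
    fs.delete(trashRoot, true);
  }
}
{code}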



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15918) Namenode gets stuck when deleting large dir in trash

2018-11-09 Thread Tao Jie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681108#comment-16681108
 ] 

Tao Jie commented on HADOOP-15918:
--

[~jojochuang] Thank you for your comments.
I updated the patch and moved this jira issue to HADOOP-COMMON.
{quote}
I am still not satisfied with FileSystem#contentSummary(). The closest I could 
find is FileSystem#getQuotaUsage(), which would return the number of objects in 
a directory, but quota is not enabled by default.
{quote}
It is true that {{getContentSummary()}} is a heavy method on the namenode, since 
it computes the usage of subdirectories recursively, while {{getQuotaUsage()}} 
just fetches the usage information from the inode's 
{{DirectoryWithQuotaFeature}}. However, the implementation of 
{{getContentSummary()}} also reads the namespace usage directly from the inode's 
{{DirectoryWithQuotaFeature}} once a quota is set, so in that case the 
performance of {{getContentSummary()}} is roughly no worse than that of 
{{getQuotaUsage()}}.
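For reference, a small sketch of the two calls being compared; the filesystem 
and directory here are assumed, and the directory is assumed to have a quota 
set:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.QuotaUsage;

class UsageProbe {
  // Both calls report the number of objects under a directory; once a quota
  // is set, getContentSummary() can read the cached counts from the inode's
  // DirectoryWithQuotaFeature instead of walking the whole subtree.
  static long countObjects(FileSystem fs, Path dir) throws IOException {
    ContentSummary cs = fs.getContentSummary(dir);
    long viaSummary = cs.getFileCount() + cs.getDirectoryCount();
    QuotaUsage qu = fs.getQuotaUsage(dir); // cheap, but quota must be enabled
    long viaQuota = qu.getFileAndDirectoryCount();
    return viaQuota; // the two counts should agree
  }
}
{code}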

> Namenode gets stuck when deleting large dir in trash
> 
>
> Key: HADOOP-15918
> URL: https://issues.apache.org/jira/browse/HADOOP-15918
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2, 3.1.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15918.001.patch, HDFS-13769.001.patch, 
> HDFS-13769.002.patch, HDFS-13769.003.patch, HDFS-13769.004.patch
>
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for 
> a long time when deleting a trash dir with a large amount of data. We found 
> this log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem 
> (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for 
> 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in one delete 
> RPC call. We implemented a TrashPolicy that divides the delete operation into 
> several delete RPCs, so that no single deletion removes too many files.
> Any thoughts? [~linyiqun]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15918) Namenode gets stuck when deleting large dir in trash

2018-11-09 Thread Tao Jie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15918:
-
Attachment: HADOOP-15918.001.patch

> Namenode gets stuck when deleting large dir in trash
> 
>
> Key: HADOOP-15918
> URL: https://issues.apache.org/jira/browse/HADOOP-15918
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2, 3.1.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15918.001.patch, HDFS-13769.001.patch, 
> HDFS-13769.002.patch, HDFS-13769.003.patch, HDFS-13769.004.patch
>
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for 
> a long time when deleting a trash dir with a large amount of data. We found 
> this log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem 
> (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for 
> 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in one delete 
> RPC call. We implemented a TrashPolicy that divides the delete operation into 
> several delete RPCs, so that no single deletion removes too many files.
> Any thoughts? [~linyiqun]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15918) Namenode gets stuck when deleting large dir in trash

2018-11-09 Thread Tao Jie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie moved HDFS-13769 to HADOOP-15918:
-

Affects Version/s: (was: 3.1.0)
   (was: 2.8.2)
   2.8.2
   3.1.0
  Key: HADOOP-15918  (was: HDFS-13769)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Namenode gets stuck when deleting large dir in trash
> 
>
> Key: HADOOP-15918
> URL: https://issues.apache.org/jira/browse/HADOOP-15918
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0, 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HDFS-13769.001.patch, HDFS-13769.002.patch, 
> HDFS-13769.003.patch, HDFS-13769.004.patch
>
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for 
> a long time when deleting a trash dir with a large amount of data. We found 
> this log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem 
> (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for 
> 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in one delete 
> RPC call. We implemented a TrashPolicy that divides the delete operation into 
> several delete RPCs, so that no single deletion removes too many files.
> Any thoughts? [~linyiqun]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2018-11-01 Thread Tao Jie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16672633#comment-16672633
 ] 

Tao Jie commented on HADOOP-15887:
--

Thank you, [~ste...@apache.org], for your comments.

I updated the patch; would you give it a quick review?

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2018-11-01 Thread Tao Jie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15887:
-
Attachment: HADOOP-15887.002.patch

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2018-10-29 Thread Tao Jie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15887:
-
Attachment: HADOOP-15887.001.patch

> Add an option to avoid writing data locally in Distcp
> -
>
> Key: HADOOP-15887
> URL: https://issues.apache.org/jira/browse/HADOOP-15887
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2, 3.0.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15887.001.patch
>
>
> When copying a large amount of data from one cluster to another via Distcp, and 
> the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
> because the default placement policy chooses the local node to store the 
> first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
> to avoid replicating to the local datanode. We can make use of this flag in 
> Distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2018-10-29 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15887:


 Summary: Add an option to avoid writing data locally in Distcp
 Key: HADOOP-15887
 URL: https://issues.apache.org/jira/browse/HADOOP-15887
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.8.2
Reporter: Tao Jie
Assignee: Tao Jie


When copying a large amount of data from one cluster to another via Distcp, and 
the Distcp jobs run in the target cluster, datanode usage becomes imbalanced, 
because the default placement policy chooses the local node to store the first 
replica.

In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient 
to avoid replicating to the local datanode. We can make use of this flag in 
Distcp.
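A minimal sketch of how the HDFS-3702 flag could be passed when the target file 
is created; the method and parameter names here are illustrative, not the 
committed patch:
{code}
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class NoLocalWriteCreate {
  // Open the target file with NO_LOCAL_WRITE so the first replica is not
  // forced onto the datanode running the DistCp map task.
  static FSDataOutputStream create(FileSystem targetFS, Path target,
      int bufferSize, short replication, long blockSize) throws IOException {
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE,
        CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE); // flag from HDFS-3702
    return targetFS.create(target, FsPermission.getFileDefault(), flags,
        bufferSize, replication, blockSize, null /* progressable */);
  }
}
{code}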



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15419) Should not obtain delegationTokens from all namenodes when using ViewFS

2018-04-27 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457290#comment-16457290
 ] 

Tao Jie commented on HADOOP-15419:
--

[~xkrogen] thank you for your comment. I understand your concerns. Actually, I 
don't want to break any existing logic or assumptions of viewfs. We just want to 
add another option for the user that acts somewhat like a {{hint}} to viewfs and 
could help reduce requests to the cluster.

> Should not obtain delegationTokens from all namenodes when using ViewFS
> ---
>
> Key: HADOOP-15419
> URL: https://issues.apache.org/jira/browse/HADOOP-15419
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Tao Jie
>Priority: Major
>
> Today, when submitting a job to a viewfs cluster, the client tries to obtain 
> delegation tokens from all namenodes under the viewfs, while only one namespace 
> is actually used by the job. This creates many unnecessary RPC calls across 
> the whole cluster.
> With viewfs, we could obtain delegation tokens only from the specific 
> namenodes involved rather than from all namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15419) Should not obtain delegationTokens from all namenodes when using ViewFS

2018-04-26 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456005#comment-16456005
 ] 

Tao Jie commented on HADOOP-15419:
--

[~hexiaoqiao] thank you for your comment.
{quote}
In fact, some request paths may only be discovered after the task is running 
(one simple instance: a hard-coded path in a MapReduce/Spark job); if we don't 
obtain a delegation token for that NameNode, the job will fail because its 
tasks cannot pass authentication.
{quote}
Agreed. On the cluster side it is not easy to know which namenodes a certain 
job will access. I think the mechanism could be more flexible; obtaining tokens 
from all namenodes seems too crude. The idea (see the sketch after this list):
1. We can have an option, maybe {{fs.viewfs.use.specific.filesystem}}; only 
when this option is true does the following logic apply.
2. When submitting an MR/Spark job, if the input/output path is a viewfs path, 
instead of obtaining tokens from all namenodes we would visit and fetch tokens 
from only a SET of filesystems.
3. The raw filesystem of the input/output path should be in the SET.
4. We may have a global option like {{fs.viewfs.global.filesystem}}, which 
defines filesystems that all jobs may visit (e.g. the filesystems of the tmp 
and scratch dirs); these should be added to the SET.
5. A job-level option like {{fs.viewfs.additional.filesystem}} defines extra 
filesystems that a certain job needs.
Since obtaining delegation tokens happens on the client side, the effect of the 
modification would be controllable.
Any thoughts?
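For illustration only, this is how the options above might be set by a client; 
none of these keys exist in Hadoop today, they merely sketch the proposal:
{code}
import org.apache.hadoop.conf.Configuration;

class ViewFsTokenOptions {
  static Configuration example() {
    Configuration conf = new Configuration();
    // hypothetical switch that enables the per-filesystem token logic
    conf.setBoolean("fs.viewfs.use.specific.filesystem", true);
    // filesystems every job may touch, e.g. the tmp/scratch dirs
    conf.set("fs.viewfs.global.filesystem", "hdfs://nn-tmp");
    // extra filesystems one particular job needs
    conf.set("fs.viewfs.additional.filesystem", "hdfs://nn-extra");
    return conf;
  }
}
{code}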

> Should not obtain delegationTokens from all namenodes when using ViewFS
> ---
>
> Key: HADOOP-15419
> URL: https://issues.apache.org/jira/browse/HADOOP-15419
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Tao Jie
>Priority: Major
>
> Today, when submitting a job to a viewfs cluster, the client tries to obtain 
> delegation tokens from all namenodes under the viewfs, while only one namespace 
> is actually used by the job. This creates many unnecessary RPC calls across 
> the whole cluster.
> With viewfs, we could obtain delegation tokens only from the specific 
> namenodes involved rather than from all namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15419) Should not obtain delegationTokens from all namenodes when using ViewFS

2018-04-26 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15419:


 Summary: Should not obtain delegationTokens from all namenodes 
when using ViewFS
 Key: HADOOP-15419
 URL: https://issues.apache.org/jira/browse/HADOOP-15419
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.2
Reporter: Tao Jie


Today, when submitting a job to a viewfs cluster, the client tries to obtain 
delegation tokens from all namenodes under the viewfs, while only one namespace 
is actually used by the job. This creates many unnecessary RPC calls across the 
whole cluster.
With viewfs, we could obtain delegation tokens only from the specific namenodes 
involved rather than from all namenodes.
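A minimal sketch of the idea, assuming the client resolves the viewfs path to 
its backing namespace first and then asks only that namenode for a token (this 
is not the actual client code):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

class PerPathTokenFetcher {
  // Resolve the viewfs path to its backing namespace and fetch a delegation
  // token from that namenode only, instead of from every mounted namespace.
  static void fetchToken(Path viewfsPath, Configuration conf,
      String renewer, Credentials creds) throws IOException {
    FileSystem viewFs = viewfsPath.getFileSystem(conf);
    Path resolved = viewFs.resolvePath(viewfsPath); // e.g. hdfs://nn1/...
    FileSystem targetFs = resolved.getFileSystem(conf);
    targetFs.addDelegationTokens(renewer, creds);
  }
}
{code}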




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-04-17 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16441759#comment-16441759
 ] 

Tao Jie commented on HADOOP-15253:
--

Hi [~shv], thank you for committing this patch. Actually, we are using 2.8.2, 
so we don't mind whether branch-2.7 has this patch.
If anyone else needs this patch in branch-2.7, we can merge it later. I prefer 
to close this JIRA.

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch, 
> HADOOP-15253.003.patch, HADOOP-15253.004.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-03-20 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16407345#comment-16407345
 ] 

Tao Jie commented on HADOOP-15253:
--

Thank you [~xyao] for your comment. I cleaned up the hard-coded values in patch 004.

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch, 
> HADOOP-15253.003.patch, HADOOP-15253.004.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-03-20 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Attachment: HADOOP-15253.004.patch

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch, 
> HADOOP-15253.003.patch, HADOOP-15253.004.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-03-20 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406279#comment-16406279
 ] 

Tao Jie commented on HADOOP-15253:
--

[~shv], thank you for your comment! I updated the unit test code per your 
suggestion.

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch, 
> HADOOP-15253.003.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-03-20 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Attachment: HADOOP-15253.003.patch

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch, 
> HADOOP-15253.003.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377911#comment-16377911
 ] 

Tao Jie commented on HADOOP-15253:
--

Added a test case for this patch. [~shv] [~xyao], would you give it a quick review?

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Attachment: HADOOP-15253.002.patch

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Affects Version/s: 2.8.2
   Status: Patch Available  (was: Open)

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-22 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Attachment: HADOOP-15253.001.patch

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When the CallQueue instance is changed to FairCallQueue, the length of each 
> queue in FairCallQueue becomes 1/priorityLevels of the original 
> DefaultCallQueue length, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-22 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15253:


 Summary: Should update maxQueueSize when refresh call queue
 Key: HADOOP-15253
 URL: https://issues.apache.org/jira/browse/HADOOP-15253
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tao Jie
Assignee: Tao Jie


When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
{{maxQueueSize}} should also be updated.
When the CallQueue instance is changed to FairCallQueue, the length of each 
queue in FairCallQueue becomes 1/priorityLevels of the original DefaultCallQueue 
length, so it would be helpful to be able to set the call queue length to a 
proper value.
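The arithmetic behind this, as a tiny illustration with made-up numbers:
{code}
class FairCallQueueSizing {
  public static void main(String[] args) {
    int maxQueueSize = 1000; // illustrative total capacity of the call queue
    int priorityLevels = 4;  // FairCallQueue's default number of sub-queues
    // after switching to FairCallQueue, each sub-queue holds only
    // maxQueueSize / priorityLevels calls:
    System.out.println(maxQueueSize / priorityLevels); // prints 250
  }
}
{code}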




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15194) Some properties for FairCallQueue should be reloadable

2018-01-26 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15194:
-
Affects Version/s: 2.8.2

> Some properties for FairCallQueue should be reloadable
> --
>
> Key: HADOOP-15194
> URL: https://issues.apache.org/jira/browse/HADOOP-15194
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Priority: Major
>
> When trying to make use of FairCallQueue on a large cluster, it is not easy 
> to set the properties appropriately on the first try. Users may adjust those 
> properties frequently, and restarting the namenode is costly on a large 
> multi-user Hadoop cluster. We should support reloading some of these 
> configuration properties without restarting.
> Related properties:
> ipc.8020.faircallqueue.multiplexer.weights
> ipc.8020.faircallqueue.decay-scheduler.thresholds
> ipc.8020.decay-scheduler.backoff.responsetime.thresholds



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15194) Some properties for FairCallQueue should be reloadable

2018-01-26 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15194:


 Summary: Some properties for FairCallQueue should be reloadable
 Key: HADOOP-15194
 URL: https://issues.apache.org/jira/browse/HADOOP-15194
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tao Jie


When trying to make use of FairCallQueue on a large cluster, it is not easy to 
set the properties appropriately on the first try. Users may adjust those 
properties frequently, and restarting the namenode is costly on a large 
multi-user Hadoop cluster. We should support reloading some of these 
configuration properties without restarting.
Related properties:
ipc.8020.faircallqueue.multiplexer.weights
ipc.8020.faircallqueue.decay-scheduler.thresholds
ipc.8020.decay-scheduler.backoff.responsetime.thresholds
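For illustration, these are ordinary Configuration keys that are currently read 
once when the scheduler is constructed; a reload mechanism would re-read them on 
refresh. The values below are made up:
{code}
import org.apache.hadoop.conf.Configuration;

class FairCallQueueProps {
  static Configuration example() {
    Configuration conf = new Configuration();
    conf.set("ipc.8020.faircallqueue.multiplexer.weights", "8,4,2,1");
    conf.set("ipc.8020.faircallqueue.decay-scheduler.thresholds", "13,25,50");
    conf.set("ipc.8020.decay-scheduler.backoff.responsetime.thresholds",
        "10s,20s,30s,40s");
    return conf;
  }
}
{code}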




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-21 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.008.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch, HADOOP-15121.007.patch, HADOOP-15121.008.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-21 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333929#comment-16333929
 ] 

Tao Jie commented on HADOOP-15121:
--

Thank you [~hanishakoneru]. All test cases are OK now.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch, HADOOP-15121.007.patch, HADOOP-15121.008.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.007.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch, HADOOP-15121.007.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331733#comment-16331733
 ] 

Tao Jie commented on HADOOP-15121:
--

[~hanishakoneru], I tried removing the redundant {{setDelegate}}; it failed 
the test {{TestDecayRpcScheduler#testPriority}}.
In this test case, the {{MetricsProxy}} instance had been initialized by another 
test, so when the new {{DecayRpcScheduler}} was constructed, {{MetricsProxy}} 
was not actually re-initialized and its delegate was empty.
So I think we had better keep the explicit {{metricsProxy.setDelegate(this)}} 
here, in case the WeakReference delegate is reclaimed.
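A sketch of the shape of that fix, paraphrasing the idea rather than quoting 
the committed code ({{INSTANCES}} stands for the static per-namespace map that 
{{MetricsProxy}} already keeps):
{code}
// Re-bind the delegate every time a scheduler is (re)constructed, so the
// shared MetricsProxy never serves metrics through a cleared WeakReference.
public static synchronized MetricsProxy getInstance(String namespace,
    DecayRpcScheduler scheduler) {
  MetricsProxy mp = INSTANCES.get(namespace);
  if (mp == null) {
    mp = new MetricsProxy(namespace);
    INSTANCES.put(namespace, mp);
  }
  mp.setDelegate(scheduler); // refresh the WeakReference on every call
  return mp;
}
{code}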

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler shou

[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331590#comment-16331590
 ] 

Tao Jie commented on HADOOP-15121:
--

[~hanishakoneru] Thank you for your comments.

I improved this patch according to your suggestions.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.006.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, 
> HADOOP-15121.006.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-12 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.005.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-12 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324937#comment-16324937
 ] 

Tao Jie commented on HADOOP-15121:
--

Updated the patch, which fixes the checkstyle issues.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-03 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309570#comment-16309570
 ] 

Tao Jie commented on HADOOP-15121:
--

[~ajayydv] thank you for your comment. I updated the patch according to your 
suggestion. Please take another look at it :)

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-03 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.004.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-21 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301035#comment-16301035
 ] 

Tao Jie commented on HADOOP-15121:
--

[~ajayydv] [~brahmareddy] [~anu] would you mind giving the latest patch a quick 
review?

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296004#comment-16296004
 ] 

Tao Jie edited comment on HADOOP-15121 at 12/19/17 1:42 AM:


[~ajayydv], thank you for your careful review. I added a catch block to handle 
any other potential NPEs. Actually, in this case the NPE has already been 
caught in MetricsSourceAdapter at line 201:
{code}
try {
  source.getMetrics(builder, all);
}
catch (Exception e) {
  LOG.error("Error getting metrics from source "+ name, e);
}
{code}
Also, I fixed the whitespace in the latest patch.


was (Author: tao jie):
[~ajayydv], thank you for your careful review. I add a catch block to handle 
any other potential NPE. Actually in this case, NPE here has already been 
caught in MetricsSourceAdapter line 201:
{code}
try {
  source.getMetrics(builder, all);
}
catch (Exception e) {
  LOG.error("Error getting metrics from source "+ name, e);
}
{code}
Also, fixed the whitespace in the latest patch。

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.

[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296004#comment-16296004
 ] 

Tao Jie commented on HADOOP-15121:
--

[~ajayydv], thank you for your careful review. I added a catch block to handle 
any other potential NPEs. Actually, in this case the NPE has already been 
caught in MetricsSourceAdapter at line 201:
{code}
try {
  source.getMetrics(builder, all);
}
catch (Exception e) {
  LOG.error("Error getting metrics from source "+ name, e);
}
{code}
Also, I fixed the whitespace in the latest patch.
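For readers following the thread, the shape of the added catch block is 
roughly the following (a hedged sketch of the patch's intent rather than the 
patch itself; {{LOG}} and the elided body are assumptions, mirroring the 
warning text quoted later in this thread):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.metrics2.MetricsCollector;

// Sketch: degrade unexpected collection failures to a WARN entry
// instead of letting an NPE propagate up to MetricsSourceAdapter.
public class DefensiveMetricsSketch {
  private static final Log LOG =
      LogFactory.getLog(DefensiveMetricsSketch.class);

  public void getMetrics(MetricsCollector collector, boolean all) {
    try {
      // ... assemble and add the scheduler's metrics records here ...
    } catch (Exception e) {
      LOG.warn("Exception thrown while metric collection. Exception : "
          + e.getMessage());
    }
  }
}
{code}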

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-

[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.003.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Affects Version/s: 2.8.2
   Status: Patch Available  (was: Open)

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-18 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295076#comment-16295076
 ] 

Tao Jie commented on HADOOP-15121:
--

[~ajayydv] thank you for your comments. I added a test for the NPE.
{{recomputeScheduleCache();}} is added here because I got another warning in 
the namenode log:
{code}
2017-12-18 21:55:16,535 WARN  ipc.DecayRpcScheduler 
(DecayRpcScheduler.java:getMetrics(832)) - Exception thrown while metric 
collection. Exception : null
2017-12-18 21:56:16,580 WARN  ipc.DecayRpcScheduler 
(DecayRpcScheduler.java:getMetrics(832)) - Exception thrown while metric 
collection. Exception : null
{code}
I printed the exception, which is:
{code}
java.lang.NullPointerException
at 
org.apache.hadoop.ipc.DecayRpcScheduler.addTopNCallerSummary(DecayRpcScheduler.java:885)
at 
org.apache.hadoop.ipc.DecayRpcScheduler.getMetrics(DecayRpcScheduler.java:827)
at 
org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:785)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1378)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
{code}
It seems {{scheduleCacheRef}} in DecayRpcScheduler is not initialized in time, 
so I call {{recomputeScheduleCache}} manually.
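In other words, the fix is to warm the cache eagerly at construction time so 
a JMX poll can never observe it before the first decay sweep. Modeled in 
isolation (field and method names mirror the discussion above; the payload 
type is a placeholder, not the real schedule map):
{code}
import java.util.concurrent.atomic.AtomicReference;

// Minimal model of the race: consumers may read the cache before the
// first periodic recompute has run. Warming it in the constructor, as
// the patch does via recomputeScheduleCache, guarantees a non-null view.
public class WarmedCacheSketch {
  private final AtomicReference<int[]> scheduleCacheRef =
      new AtomicReference<int[]>();

  public WarmedCacheSketch() {
    recomputeScheduleCache();  // eager warm-up: readers never see null
  }

  private void recomputeScheduleCache() {
    scheduleCacheRef.set(new int[] {0});  // placeholder recomputation
  }

  public int[] snapshot() {
    return scheduleCacheRef.get();  // safe even before the first sweep
  }
}
{code}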


> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpc

[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-18 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.002.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.
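
A minimal sketch of that direction (the {{WeakReference}} delegate and the 
constructor wiring here are assumptions inferred from the stack trace, not the 
actual patch):

{code}
import java.lang.ref.WeakReference;

import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsSource;

// Sketch only: a metrics proxy whose delegate is set before the source can be
// polled, so getMetrics() never dereferences a null delegate. The real
// MetricsProxy lives inside DecayRpcScheduler.
final class MetricsProxySketch implements MetricsSource {
  // Weak reference so the proxy does not keep a discarded scheduler alive.
  private final WeakReference<Object> delegate;

  MetricsProxySketch(Object scheduler) {
    // Initialize the delegate first; JMX registration (done by the caller)
    // may call getMetrics() almost immediately.
    this.delegate = new WeakReference<>(scheduler);
  }

  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    Object scheduler = delegate.get();
    if (scheduler == null) {
      return; // guard instead of the NPE at DecayRpcScheduler.java:781
    }
    // ... append the scheduler's counters to the collector here ...
  }
}
{code}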



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-15 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292183#comment-16292183
 ] 

Tao Jie edited comment on HADOOP-15121 at 12/15/17 8:16 AM:


I attached a patch that initializes the {{delegate}} field in the 
initialization method of MetricsProxy. I also invoke {{recomputeScheduleCache}} 
when initializing DecayRpcScheduler, in case {{scheduleCacheRef}} has not been 
initialized.
Now the namenode works well.
I am not sure whether some incorrect configuration in my environment caused 
the NPE. [~xyao], any suggestion?


was (Author: tao jie):
I attached a patch that initializes the {{delegate}} field in the 
initialization method of MetricsProxy. I also invoke {{recomputeScheduleCache}} 
when initializing DecayRpcScheduler, in case {{scheduleCacheRef}} has not been 
initialized.
Now the namenode works well.
I am not sure whether some incorrect configuration in my environment caused 
the NPE. [~xiaoyuyao], any suggestion?

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(N

[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-15 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292183#comment-16292183
 ] 

Tao Jie commented on HADOOP-15121:
--

I attached a patch that initializes the {{delegate}} field in the 
initialization method of MetricsProxy. I also invoke {{recomputeScheduleCache}} 
when initializing DecayRpcScheduler, in case {{scheduleCacheRef}} has not been 
initialized.
Now the namenode works well.
I am not sure whether some incorrect configuration in my environment caused 
the NPE. [~xiaoyuyao], any suggestion?

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-15 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15121:
-
Attachment: HADOOP-15121.001.patch

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2017-12-14 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15121:


 Summary: Encounter NullPointerException when using 
DecayRpcScheduler
 Key: HADOOP-15121
 URL: https://issues.apache.org/jira/browse/HADOOP-15121
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tao Jie


I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
got an exception in the namenode:
{code}
2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
(MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from source 
DecayRpcSchedulerMetrics2.ipc.8020
java.lang.NullPointerException
at 
org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
at 
org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
at 
org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
at 
org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
at 
org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
at 
org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
{code}
It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
{{delegate}} field in its initialization method.
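
For context, the failing path is reached by enabling the scheduler for one IPC 
port. A minimal sketch of that setup (property name taken from the report 
above; everything else is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class DecaySchedulerSetup {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Per-port scheduler override for the NameNode RPC port 8020. The RPC
    // server's CallQueueManager instantiates this class reflectively, which
    // is where the trace above enters the DecayRpcScheduler constructor.
    conf.set("ipc.8020.scheduler.impl",
        "org.apache.hadoop.ipc.DecayRpcScheduler");
    System.out.println(conf.get("ipc.8020.scheduler.impl"));
  }
}
{code}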




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12569) ZKFC should stop namenode before itself quit in some circumstances

2016-02-04 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-12569:
-
Description: 
We have met the following HA scenario:
NN1 (active) and zkfc1 on node1;
NN2 (standby) and zkfc2 on node2.
1. Stop the network on node1; NN2 becomes active. On node1, zkfc1 kills itself 
since it cannot connect to ZooKeeper, but leaves NN1 still running.
2. Several minutes later, the network on node1 recovers. NN1 is running but 
out of control: NN1 and NN2 both run as active NN.
Maybe ZKFC should stop the NN before it quits in such circumstances.


  was:
We have met the following HA scenario:
NN1 (active) and zkfc1 on node1;
NN2 (standby) and zkfc2 on node2.
1. Stop the network on node1; NN2 becomes active. On node2, zkfc2 kills itself 
since it cannot connect to ZooKeeper, but leaves NN1 still running.
2. Several minutes later, the network on node1 recovers. NN1 is running but 
out of control: NN1 and NN2 both run as active NN.
Maybe ZKFC should stop the NN before it quits in such circumstances.



> ZKFC should stop namenode before itself quit in some circumstances
> --
>
> Key: HADOOP-12569
> URL: https://issues.apache.org/jira/browse/HADOOP-12569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>
> We have met the following HA scenario:
> NN1 (active) and zkfc1 on node1;
> NN2 (standby) and zkfc2 on node2.
> 1. Stop the network on node1; NN2 becomes active. On node1, zkfc1 kills itself 
> since it cannot connect to ZooKeeper, but leaves NN1 still running.
> 2. Several minutes later, the network on node1 recovers. NN1 is running but 
> out of control: NN1 and NN2 both run as active NN.
> Maybe ZKFC should stop the NN before it quits in such circumstances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12569) ZKFC should stop namenode before itself quit in some circumstances

2016-02-04 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133513#comment-15133513
 ] 

Tao Jie commented on HADOOP-12569:
--

In our version, we just force-kill the namenode process with a shell command 
before ZKFC quits on a ZooKeeper connection failure.
Maybe we should add a STOP command to HAServiceProtocol?
Any thoughts?
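
A rough sketch of that workaround as a shutdown step (HAServiceProtocol has no 
STOP command today, so the hook below and the pid handling are hypothetical):

{code}
import java.io.IOException;

// Sketch of the shell workaround described above: before the ZKFC process
// exits on an unrecoverable ZooKeeper session loss, force-kill the local
// NameNode so it cannot linger as a second active.
public class ZkfcStopHook {
  public static void stopLocalNameNodeBeforeExit(long namenodePid)
      throws IOException {
    // Equivalent of the shell command: kill -9 <namenode pid>
    new ProcessBuilder("kill", "-9", Long.toString(namenodePid))
        .inheritIO()
        .start();
  }
}
{code}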

> ZKFC should stop namenode before itself quit in some circumstances
> --
>
> Key: HADOOP-12569
> URL: https://issues.apache.org/jira/browse/HADOOP-12569
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>
> We have met the following HA scenario:
> NN1 (active) and zkfc1 on node1;
> NN2 (standby) and zkfc2 on node2.
> 1. Stop the network on node1; NN2 becomes active. On node2, zkfc2 kills itself 
> since it cannot connect to ZooKeeper, but leaves NN1 still running.
> 2. Several minutes later, the network on node1 recovers. NN1 is running but 
> out of control: NN1 and NN2 both run as active NN.
> Maybe ZKFC should stop the NN before it quits in such circumstances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12569) ZKFC should stop namenode before itself quit in some circumstances

2015-11-13 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-12569:


 Summary: ZKFC should stop namenode before itself quit in some 
circumstances
 Key: HADOOP-12569
 URL: https://issues.apache.org/jira/browse/HADOOP-12569
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.6.0
Reporter: Tao Jie


We have met the following HA scenario:
NN1 (active) and zkfc1 on node1;
NN2 (standby) and zkfc2 on node2.
1. Stop the network on node1; NN2 becomes active. On node2, zkfc2 kills itself 
since it cannot connect to ZooKeeper, but leaves NN1 still running.
2. Several minutes later, the network on node1 recovers. NN1 is running but 
out of control: NN1 and NN2 both run as active NN.
Maybe ZKFC should stop the NN before it quits in such circumstances.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12331) FSShell Delete and Rename should operate on symlinks rather than their target

2015-08-30 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-12331:
-
Attachment: HADOOP-12331.003.patch

> FSShell Delete and Rename should operate on symlinks rather than their target
> -
>
> Key: HADOOP-12331
> URL: https://issues.apache.org/jira/browse/HADOOP-12331
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-12331.001.patch, HADOOP-12331.002.patch, 
> HADOOP-12331.003.patch
>
>
> Currently, when we remove or rename a symlink in FSShell, the symlink target 
> is affected instead of the symlink itself. However, FileSystem#delete and 
> FileSystem#rename can operate on symlinks rather than their targets.
> FSShell should behave consistently with FileSystem on symlink remove and 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12331) FSShell Delete and Rename should operate on symlinks rather than their target

2015-08-19 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-12331:
-
Status: Patch Available  (was: Open)

> FSShell Delete and Rename should operate on symlinks rather than their target
> -
>
> Key: HADOOP-12331
> URL: https://issues.apache.org/jira/browse/HADOOP-12331
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-12331.001.patch, HADOOP-12331.002.patch
>
>
> Currently, when we remove or rename a symlink in FSShell, the symlink target 
> is affected instead of the symlink itself. However, FileSystem#delete and 
> FileSystem#rename can operate on symlinks rather than their targets.
> FSShell should behave consistently with FileSystem on symlink remove and 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12331) FSShell Delete and Rename should operate on symlinks rather than their target

2015-08-19 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-12331:
-
Attachment: HADOOP-12331.002.patch

> FSShell Delete and Rename should operate on symlinks rather than their target
> -
>
> Key: HADOOP-12331
> URL: https://issues.apache.org/jira/browse/HADOOP-12331
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-12331.001.patch, HADOOP-12331.002.patch
>
>
> Currently, when we remove or rename a symlink in FSShell, the symlink target 
> is affected instead of the symlink itself. However, FileSystem#delete and 
> FileSystem#rename can operate on symlinks rather than their targets.
> FSShell should behave consistently with FileSystem on symlink remove and 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12331) FSShell Delete and Rename should operate on symlinks rather than their target

2015-08-17 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie reassigned HADOOP-12331:


Assignee: Tao Jie

> FSShell Delete and Rename should operate on symlinks rather than their target
> -
>
> Key: HADOOP-12331
> URL: https://issues.apache.org/jira/browse/HADOOP-12331
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-12331.001.patch
>
>
> Currently, when we remove or rename a symlink in FSShell, the symlink target 
> is affected instead of the symlink itself. However, FileSystem#delete and 
> FileSystem#rename can operate on symlinks rather than their targets.
> FSShell should behave consistently with FileSystem on symlink remove and 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12331) FSShell Delete and Rename should operate on symlinks rather than their target

2015-08-17 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-12331:
-
Attachment: HADOOP-12331.001.patch

> FSShell Delete and Rename should operate on symlinks rather than their target
> -
>
> Key: HADOOP-12331
> URL: https://issues.apache.org/jira/browse/HADOOP-12331
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Priority: Minor
> Attachments: HADOOP-12331.001.patch
>
>
> Currently, when we remove or rename a symlink in FSShell, the symlink target 
> is affected instead of the symlink itself. However, FileSystem#delete and 
> FileSystem#rename can operate on symlinks rather than their targets.
> FSShell should behave consistently with FileSystem on symlink remove and 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12331) FSShell Delete and Rename should operate on symlinks rather than their target

2015-08-17 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-12331:


 Summary: FSShell Delete and Rename should operate on symlinks 
rather than their target
 Key: HADOOP-12331
 URL: https://issues.apache.org/jira/browse/HADOOP-12331
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Tao Jie
Priority: Minor


Currently, when we remove or rename a symlink in FSShell, the symlink target 
is affected instead of the symlink itself. However, FileSystem#delete and 
FileSystem#rename can operate on symlinks rather than their targets.
FSShell should behave consistently with FileSystem on symlink remove and 
rename.
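
A small sketch of the intended contract (paths are illustrative, and the 
cluster must have symlinks enabled for {{createSymlink}} to succeed):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SymlinkDeleteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FileContext fc = FileContext.getFileContext(conf);

    Path target = new Path("/tmp/data");
    Path link = new Path("/tmp/data-link");
    fc.createSymlink(target, link, false /* createParent */);

    // FileSystem#delete removes the link itself; /tmp/data survives.
    fs.delete(link, false /* recursive */);

    // The ask above is for `hadoop fs -rm /tmp/data-link` to behave the
    // same way, instead of resolving the link and deleting the target.
  }
}
{code}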



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2015-08-17 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14699137#comment-14699137
 ] 

Tao Jie commented on HADOOP-9984:
-

Since we have the resolveLinks flag in the globber, will you set resolveLinks 
to false when we do delete & rename in FSShell?
FSShell delete & rename should operate on symlinks rather than their targets, 
as reported in HDFS-5021.
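
The resolve-or-not distinction behind that question, shown with the public 
FileContext API (a minimal sketch; the path is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class ResolveLinksSketch {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path link = new Path("/tmp/data-link");

    // Follows the link: describes the target, so isSymlink() is false.
    FileStatus resolved = fc.getFileStatus(link);
    // Does not follow the link: describes the link itself.
    FileStatus unresolved = fc.getFileLinkStatus(link);

    System.out.println(resolved.isSymlink());   // false
    System.out.println(unresolved.isSymlink()); // true
  }
}
{code}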

> FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
> default
> --
>
> Key: HADOOP-9984
> URL: https://issues.apache.org/jira/browse/HADOOP-9984
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.1.0-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
> HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
> HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch, 
> HADOOP-9984.013.patch, HADOOP-9984.014.patch, HADOOP-9984.015.patch
>
>
> During the process of adding symlink support to FileSystem, we realized that 
> many existing HDFS clients would be broken by listStatus and globStatus 
> returning symlinks.  One example is applications that assume that 
> !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
> HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
> resolved paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)