[jira] [Commented] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"

2017-01-18 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829157#comment-15829157
 ] 

Pan Yuxuan commented on HDFS-11167:
---

[~brahmareddy] [~ajisakaa] 
Could you please help to review this patch? Thanks!

> IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS 
> "-jvm server"
> -
>
> Key: HDFS-11167
> URL: https://issues.apache.org/jira/browse/HDFS-11167
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
> Environment: IBM PowerPC
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11167-0001.patch, HDFS-11167-0002.patch
>
>
> When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
> with the error
> {noformat}
> jsvc error: Invalid JVM name specified server
> {noformat}
> This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
> The IBM JDK runs fine without -jvm server.
> So I think we can check for the IBM JDK in hdfs-config.sh before setting 
> HDFS_DATANODE_SECURE_EXTRA_OPTS.
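
For reference, a minimal sketch of the kind of guard the patch proposes for 
hdfs-config.sh (a hypothetical shape, not the actual patch; the detection 
string and variable handling here are assumptions):
{code}
# Hedged sketch: only pass "-jvm server" to jsvc on non-IBM JDKs.
# The IBM JDK's "java -version" output contains "IBM", which is what
# this check keys off.
if "${JAVA_HOME}/bin/java" -version 2>&1 | grep -qi "IBM"; then
  # IBM JDK: jsvc rejects "-jvm server", so leave the extra opts empty.
  HDFS_DATANODE_SECURE_EXTRA_OPTS=""
else
  HDFS_DATANODE_SECURE_EXTRA_OPTS="-jvm server"
fi
{code}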






[jira] [Updated] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"

2016-11-22 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-11167:
--
Attachment: HDFS-11167-0002.patch

Fixed the shellcheck error.

> IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS 
> "-jvm server"
> -
>
> Key: HDFS-11167
> URL: https://issues.apache.org/jira/browse/HDFS-11167
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
> Environment: IBM PowerPC
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11167-0001.patch, HDFS-11167-0002.patch
>
>
> When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
> with the error
> {noformat}
> jsvc error: Invalid JVM name specified server
> {noformat}
> This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
> The IBM JDK runs fine without -jvm server.
> So I think we can check for the IBM JDK in hdfs-config.sh before setting 
> HDFS_DATANODE_SECURE_EXTRA_OPTS.






[jira] [Updated] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"

2016-11-22 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-11167:
--
Fix Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS 
> "-jvm server"
> -
>
> Key: HDFS-11167
> URL: https://issues.apache.org/jira/browse/HDFS-11167
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
> Environment: IBM PowerPC
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11167-0001.patch
>
>
> When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
> with the error
> {noformat}
> jsvc error: Invalid JVM name specified server
> {noformat}
> This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
> The IBM JDK runs fine without -jvm server.
> So I think we can check for the IBM JDK in hdfs-config.sh before setting 
> HDFS_DATANODE_SECURE_EXTRA_OPTS.






[jira] [Updated] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"

2016-11-22 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-11167:
--
Attachment: HDFS-11167-0001.patch

Attached a simple patch for this.

> IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS 
> "-jvm server"
> -
>
> Key: HDFS-11167
> URL: https://issues.apache.org/jira/browse/HDFS-11167
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
> Environment: IBM PowerPC
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Attachments: HDFS-11167-0001.patch
>
>
> When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
> with the error
> {noformat}
> jsvc error: Invalid JVM name specified server
> {noformat}
> This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
> The IBM JDK runs fine without -jvm server.
> So I think we can check for the IBM JDK in hdfs-config.sh before setting 
> HDFS_DATANODE_SECURE_EXTRA_OPTS.






[jira] [Updated] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"

2016-11-22 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-11167:
--
Description: 
When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
with the error
{noformat}
jsvc error: Invalid JVM name specified server
{noformat}
This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
The IBM JDK runs fine without -jvm server.
So I think we can check for the IBM JDK in hdfs-config.sh before setting 
HDFS_DATANODE_SECURE_EXTRA_OPTS.

  was:
When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
with the error
{{noformat}}
jsvc error: Invalid JVM name specified server
{{noformat}}
This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
The IBM JDK runs fine without -jvm server.
So I think we can check for the IBM JDK in hdfs-config.sh before setting 
HDFS_DATANODE_SECURE_EXTRA_OPTS.


> IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS 
> "-jvm server"
> -
>
> Key: HDFS-11167
> URL: https://issues.apache.org/jira/browse/HDFS-11167
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
> Environment: IBM PowerPC
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
>
> When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
> with the error
> {noformat}
> jsvc error: Invalid JVM name specified server
> {noformat}
> This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
> The IBM JDK runs fine without -jvm server.
> So I think we can check for the IBM JDK in hdfs-config.sh before setting 
> HDFS_DATANODE_SECURE_EXTRA_OPTS.






[jira] [Created] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"

2016-11-22 Thread Pan Yuxuan (JIRA)
Pan Yuxuan created HDFS-11167:
-

 Summary: IBM java can not start secure datanode with 
HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"
 Key: HDFS-11167
 URL: https://issues.apache.org/jira/browse/HDFS-11167
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0-alpha1
 Environment: IBM PowerPC

Reporter: Pan Yuxuan
Assignee: Pan Yuxuan


When we run a secure datanode on IBM PowerPC with the IBM JDK, jsvc fails 
with the error
{{noformat}}
jsvc error: Invalid JVM name specified server
{{noformat}}
This is caused by HDFS_DATANODE_SECURE_EXTRA_OPTS containing "-jvm server".
The IBM JDK runs fine without -jvm server.
So I think we can check for the IBM JDK in hdfs-config.sh before setting 
HDFS_DATANODE_SECURE_EXTRA_OPTS.






[jira] [Commented] (HDFS-10760) DataXceiver#run() should not log InvalidToken exception as an error

2016-08-30 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448202#comment-15448202
 ] 

Pan Yuxuan commented on HDFS-10760:
---

[~brahmareddy]   [~ajisakaa]   [~shahrs87]  
Could anyone please help to review this simple patch? Thanks!

> DataXceiver#run() should not log InvalidToken exception as an error
> ---
>
> Key: HDFS-10760
> URL: https://issues.apache.org/jira/browse/HDFS-10760
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Attachments: HADOOP-13492.patch, HDFS-10760-1.patch
>
>
> DataXceiver#run() logs the InvalidToken exception as an error.
> When a client has an expired token and simply refetches a new one, the DN 
> log shows an error like the one below:
> {noformat}
> 2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
> XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
> /10.17.1.5:38844 dst: /10.17.1.5:50010
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
> userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
> blockId=1077120201, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is not a server error, and DataXceiver#checkAccess() has already 
> logged the InvalidToken as a warning.
> A simple fix is to catch the InvalidToken exception in DataXceiver#run(), 
> keeping only the warning logged by DataXceiver#checkAccess() in the DN log.
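
To make the proposal concrete, a hedged sketch of the shape such a change 
could take inside DataXceiver#run() (simplified and illustrative only; op, 
readOp(), processOp(), datanode, and LOG stand for members of the 
surrounding class, and the actual patch may differ):
{code}
Op op = null;
try {
  op = readOp();
  processOp(op);
} catch (SecretManager.InvalidToken it) {
  // The client presented an expired/invalid block token; checkAccess()
  // has already logged this at WARN, so do not log it again at ERROR.
} catch (Throwable t) {
  LOG.error(datanode.getDisplayName() + ":DataXceiver error processing "
      + (op == null ? "unknown" : op.name()) + " operation", t);
}
{code}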






[jira] [Updated] (HDFS-10760) DataXceiver#run() should not log InvalidToken exception as an error

2016-08-16 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10760:
--
Description: 
DataXceiver#run() logs the InvalidToken exception as an error.
When a client has an expired token and simply refetches a new one, the DN log 
shows an error like the one below:
{noformat}
2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
/10.17.1.5:38844 dst: /10.17.1.5:50010
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
blockId=1077120201, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
at 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
at 
org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
at java.lang.Thread.run(Thread.java:745)
{noformat}
This is not a server error, and DataXceiver#checkAccess() has already logged 
the InvalidToken as a warning.
A simple fix is to catch the InvalidToken exception in DataXceiver#run(), 
keeping only the warning logged by DataXceiver#checkAccess() in the DN log.

  was:
DataXceiver#run() logs the InvalidToken exception as an error.
When a client has an expired token and simply refetches a new one, the DN log 
shows an error like the one below:
{noformat}
2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
/10.17.1.5:38844 dst: /10.17.1.5:50010
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
blockId=1077120201, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
at 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
at 
org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
at java.lang.Thread.run(Thread.java:745)
{noformat}
This is not a server error, and DataXceiver#checkAccess() has already logged 
the InvalidToken as a warning.


> DataXceiver#run() should not log InvalidToken exception as an error
> ---
>
> Key: HDFS-10760
> URL: https://issues.apache.org/jira/browse/HDFS-10760
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Attachments: HADOOP-13492.patch, HDFS-10760-1.patch
>
>
> DataXceiver#run() logs the InvalidToken exception as an error.
> When a client has an expired token and simply refetches a new one, the DN 
> log shows an error like the one below:
> {noformat}
> 2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
> XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
> /10.17.1.5:38844 dst: /10.17.1.5:50010
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
> userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
> blockId=1077120201, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is not a server error, and DataXceiver#checkAccess() has already 
> logged the InvalidToken as a warning.
> A simple fix is to catch the InvalidToken exception in DataXceiver#run(), 
> keeping only the warning logged by DataXceiver#checkAccess() in the DN log.

[jira] [Commented] (HDFS-10760) DataXceiver#run() should not log InvalidToken exception as an error

2016-08-16 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422312#comment-15422312
 ] 

Pan Yuxuan commented on HDFS-10760:
---

The test failure is not related to this patch.

> DataXceiver#run() should not log InvalidToken exception as an error
> ---
>
> Key: HDFS-10760
> URL: https://issues.apache.org/jira/browse/HDFS-10760
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Attachments: HADOOP-13492.patch, HDFS-10760-1.patch
>
>
> DataXceiver#run() logs the InvalidToken exception as an error.
> When a client has an expired token and simply refetches a new one, the DN 
> log shows an error like the one below:
> {noformat}
> 2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
> XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
> /10.17.1.5:38844 dst: /10.17.1.5:50010
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
> userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
> blockId=1077120201, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is not a server error, and DataXceiver#checkAccess() has already 
> logged the InvalidToken as a warning.






[jira] [Updated] (HDFS-10760) DataXceiver#run() should not log InvalidToken exception as an error

2016-08-16 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10760:
--
Affects Version/s: (was: 3.0.0-alpha1)
   2.7.2

> DataXceiver#run() should not log InvalidToken exception as an error
> ---
>
> Key: HDFS-10760
> URL: https://issues.apache.org/jira/browse/HDFS-10760
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Attachments: HADOOP-13492.patch, HDFS-10760-1.patch
>
>
> DataXceiver#run() logs the InvalidToken exception as an error.
> When a client has an expired token and simply refetches a new one, the DN 
> log shows an error like the one below:
> {noformat}
> 2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
> XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
> /10.17.1.5:38844 dst: /10.17.1.5:50010
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
> userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
> blockId=1077120201, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is not a server error, and DataXceiver#checkAccess() has already 
> logged the InvalidToken as a warning.






[jira] [Updated] (HDFS-10760) DataXceiver#run() should not log InvalidToken exception as an error

2016-08-14 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10760:
--
Attachment: HDFS-10760-1.patch

Uploaded a new patch with checkstyle fixes.


> DataXceiver#run() should not log InvalidToken exception as an error
> ---
>
> Key: HDFS-10760
> URL: https://issues.apache.org/jira/browse/HDFS-10760
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Pan Yuxuan
>Assignee: Pan Yuxuan
> Attachments: HADOOP-13492.patch, HDFS-10760-1.patch
>
>
> DataXceiver#run() logs the InvalidToken exception as an error.
> When a client has an expired token and simply refetches a new one, the DN 
> log shows an error like the one below:
> {noformat}
> 2016-08-11 02:41:09,817 ERROR datanode.DataNode (DataXceiver.java:run(269)) - 
> XXX:50010:DataXceiver error processing READ_BLOCK operation  src: 
> /10.17.1.5:38844 dst: /10.17.1.5:50010
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1470850746803, keyId=-2093956963, 
> userId=hbase, blockPoolId=BP-641703426-10.17.1.2-1468517918886, 
> blockId=1077120201, access modes=[READ]) is expired.
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1236)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:481)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:242)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This is not a server error, and DataXceiver#checkAccess() has already 
> logged the InvalidToken as a warning.






[jira] [Updated] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-07-06 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10582:
--
Attachment: HDFS-10582.1.patch

Updated the patch for trunk, for review.

> Change deprecated configuration fs.checkpoint.dir to 
> dfs.namenode.checkpoint.dir in HDFS Commands Doc
> -
>
> Key: HDFS-10582
> URL: https://issues.apache.org/jira/browse/HDFS-10582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Priority: Minor
> Attachments: HDFS-10582.1.patch, HDFS-10582.patch
>
>
> The HDFS Commands documentation for -importCheckpoint uses the deprecated 
> configuration string {code}fs.checkpoint.dir{code}; we can use 
> {noformat}dfs.namenode.checkpoint.dir{noformat} instead.
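
For context, this is the non-deprecated key as it would be set in 
hdfs-site.xml (the value path below is an illustrative assumption, not from 
the patch):
{code}
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <!-- Where checkpoint images are stored; the directory shown here
       is an assumed example. -->
  <value>file:///var/hadoop/dfs/namesecondary</value>
</property>
{code}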






[jira] [Commented] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-07-05 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362112#comment-15362112
 ] 

Pan Yuxuan commented on HDFS-10582:
---

Uploaded a fix for review!

> Change deprecated configuration fs.checkpoint.dir to 
> dfs.namenode.checkpoint.dir in HDFS Commands Doc
> -
>
> Key: HDFS-10582
> URL: https://issues.apache.org/jira/browse/HDFS-10582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Priority: Minor
> Attachments: HDFS-10582.patch
>
>
> The HDFS Commands documentation for -importCheckpoint uses the deprecated 
> configuration string {code}fs.checkpoint.dir{code}; we can use 
> {noformat}dfs.namenode.checkpoint.dir{noformat} instead.






[jira] [Updated] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-06-27 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10582:
--
Description: The HDFS Commands documentation for -importCheckpoint uses the 
deprecated configuration string {code}fs.checkpoint.dir{code}; we can use 
{noformat}dfs.namenode.checkpoint.dir{noformat} instead.  (was: HDFS Commands 
Documentation -importCheckpoint uses the deprecated configuration string 
{noformat}fs.checkpoint.dir{noformat}, we can use 
{noformat}dfs.namenode.checkpoint.dir{noformat} instead.)

> Change deprecated configuration fs.checkpoint.dir to 
> dfs.namenode.checkpoint.dir in HDFS Commands Doc
> -
>
> Key: HDFS-10582
> URL: https://issues.apache.org/jira/browse/HDFS-10582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Priority: Minor
> Attachments: HDFS-10582.patch
>
>
> The HDFS Commands documentation for -importCheckpoint uses the deprecated 
> configuration string {code}fs.checkpoint.dir{code}; we can use 
> {noformat}dfs.namenode.checkpoint.dir{noformat} instead.






[jira] [Updated] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-06-27 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10582:
--
Attachment: HDFS-10582.patch

> Change deprecated configuration fs.checkpoint.dir to 
> dfs.namenode.checkpoint.dir in HDFS Commands Doc
> -
>
> Key: HDFS-10582
> URL: https://issues.apache.org/jira/browse/HDFS-10582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Priority: Minor
> Attachments: HDFS-10582.patch
>
>
> The HDFS Commands documentation for -importCheckpoint uses the deprecated 
> configuration string {noformat}fs.checkpoint.dir{noformat}; we can use 
> {noformat}dfs.namenode.checkpoint.dir{noformat} instead.






[jira] [Created] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-06-27 Thread Pan Yuxuan (JIRA)
Pan Yuxuan created HDFS-10582:
-

 Summary: Change deprecated configuration fs.checkpoint.dir to 
dfs.namenode.checkpoint.dir in HDFS Commands Doc
 Key: HDFS-10582
 URL: https://issues.apache.org/jira/browse/HDFS-10582
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.2
Reporter: Pan Yuxuan
Priority: Minor


The HDFS Commands documentation for -importCheckpoint uses the deprecated 
configuration string {noformat}fs.checkpoint.dir{noformat}; we can use 
{noformat}dfs.namenode.checkpoint.dir{noformat} instead.






[jira] [Commented] (HDFS-8738) Limit Exceptions thrown by DataNode when a client makes socket connection and sends an empty message

2016-06-27 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15350667#comment-15350667
 ] 

Pan Yuxuan commented on HDFS-8738:
--

+1, we also hit this error when the Ambari Metrics monitor checks the DN 
process. And can we downgrade the log level to debug when the exception is an 
instance of EOFException?
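
For illustration, a hedged, self-contained sketch of that idea (a 
hypothetical helper, not the actual DataXceiver code; the class and method 
names here are invented for the example):
{code}
import java.io.EOFException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class XceiverErrorLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(XceiverErrorLogging.class);

  // Log the common "empty probe connection" EOFException at DEBUG,
  // and everything else at ERROR, as suggested above.
  static void logXceiverError(String displayName, Throwable t) {
    if (t instanceof EOFException) {
      // The peer connected and closed before sending a full op header
      // (e.g. a liveness check); not a server-side error.
      LOG.debug("{}:DataXceiver connection closed before an op was read",
          displayName, t);
    } else {
      LOG.error("{}:DataXceiver error processing operation", displayName, t);
    }
  }
}
{code}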

> Limit Exceptions thrown by DataNode when a client makes socket connection and 
> sends an empty message
> 
>
> Key: HDFS-8738
> URL: https://issues.apache.org/jira/browse/HDFS-8738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Rajesh Kartha
>Assignee: Rajesh Kartha
>Priority: Minor
> Attachments: HDFS-8738.001.patch
>
>
> When a client creates a socket connection to the Datanode and sends an empty 
> message, the datanode logs have exceptions like these:
> 2015-07-08 20:00:55,427 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41508 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> 2015-07-08 20:00:56,671 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41509 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> These can fill up the logs; this was recently noticed with an Ambari 
> 2.1-based install, which checks whether the datanode is up.
> This can be easily reproduced with a simple Java client that creates a 
> socket connection:
> {code}
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.net.Socket;
>
> public class EmptyMessageClient {
>     public static void main(String[] args) {
>         // Connect to the DataNode's data transfer port and close the
>         // stream without sending a full op header.
>         try (Socket dnClient = new Socket("127.0.0.1", 50010);
>              DataOutputStream os =
>                  new DataOutputStream(dnClient.getOutputStream())) {
>             os.writeBytes("");
>         } catch (IOException e) {
>             // UnknownHostException is a subclass of IOException.
>             e.printStackTrace();
>         }
>     }
> }
> {code}






[jira] [Commented] (HDFS-9902) dfs.datanode.du.reserved should be difference between StorageType DISK and RAM_DISK

2016-03-07 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184151#comment-15184151
 ] 

Pan Yuxuan commented on HDFS-9902:
--

[~brahmareddy] thanks for your quick reply and for fixing this issue.
I think the general config makes sense to me.

> dfs.datanode.du.reserved should be difference between StorageType DISK and 
> RAM_DISK
> ---
>
> Key: HDFS-9902
> URL: https://issues.apache.org/jira/browse/HDFS-9902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9902.patch
>
>
> Hadoop now supports different storage types (DISK, SSD, ARCHIVE, and 
> RAM_DISK), but they share one configuration, dfs.datanode.du.reserved.
> The DISK size may be several TB while the RAM_DISK size may be only several 
> tens of GB.
> The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same 
> DN and set dfs.datanode.du.reserved to 10 GB, this wastes a lot of RAM_DISK 
> capacity.
> Since RAM_DISK usage can be 100%, I don't want the dfs.datanode.du.reserved 
> value configured for DISK to impact the usage of tmpfs.
> So can we make a new configuration for RAM_DISK, or just skip this 
> configuration for RAM_DISK?





[jira] [Created] (HDFS-9902) dfs.datanode.du.reserved should be difference between StorageType DISK and RAM_DISK

2016-03-03 Thread Pan Yuxuan (JIRA)
Pan Yuxuan created HDFS-9902:


 Summary: dfs.datanode.du.reserved should be difference between 
StorageType DISK and RAM_DISK
 Key: HDFS-9902
 URL: https://issues.apache.org/jira/browse/HDFS-9902
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.2
Reporter: Pan Yuxuan


Hadoop now supports different storage types (DISK, SSD, ARCHIVE, and 
RAM_DISK), but they share one configuration, dfs.datanode.du.reserved.
The DISK size may be several TB while the RAM_DISK size may be only several 
tens of GB.
The problem is that when I configure DISK and RAM_DISK (tmpfs) in the same DN 
and set dfs.datanode.du.reserved to 10 GB, this wastes a lot of RAM_DISK 
capacity.
Since RAM_DISK usage can be 100%, I don't want the dfs.datanode.du.reserved 
value configured for DISK to impact the usage of tmpfs.
So can we make a new configuration for RAM_DISK, or just skip this 
configuration for RAM_DISK?
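
For illustration, one hedged shape such a per-storage-type override could 
take in hdfs-site.xml (the suffixed property name below is hypothetical, 
shown only to make the idea concrete):
{code}
<!-- Generic default, effectively applied to DISK volumes. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- 10 GB -->
</property>
<!-- Hypothetical storage-type-specific override so RAM_DISK (tmpfs)
     can be used up to 100%. -->
<property>
  <name>dfs.datanode.du.reserved.ram_disk</name>
  <value>0</value>
</property>
{code}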


