[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398762#comment-15398762
 ] 

Fenghua Hu commented on HDFS-10690:
---

Patch updated.

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap employs a Red-Black tree for sorting. This isn't an issue when using 
> traditional HDDs, but when using high-performance SSD/PCIe flash, the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. FIFO ordering is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list: we 
> only need to modify its predecessor's and successor's references in the list.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch was against 2.6.4; I am now porting it to Hadoop trunk, and 
> the patch will be posted soon.
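
For illustration, here is a minimal sketch of the intrusive doubly-linked FIFO 
described above; the class and field names (LruList, head, tail) are assumptions 
for exposition, not the patch's actual code. Insertion appends at the tail and 
removal re-wires only the neighbors, so both are O(1) with no tree rebalancing:

{code}
class Replica {
  Replica next = null;
  Replica prev = null;
}

class LruList {
  private Replica head = null; // oldest entry (FIFO front)
  private Replica tail = null; // newest entry

  // O(1) insertion: append at the tail, no rebalancing.
  void append(Replica r) {
    r.prev = tail;
    r.next = null;
    if (tail != null) {
      tail.next = r;
    } else {
      head = r;
    }
    tail = r;
  }

  // O(1) removal: the replica carries its own links, so no lookup is
  // needed; only its predecessor and successor are re-wired.
  void remove(Replica r) {
    if (r.prev != null) {
      r.prev.next = r.next;
    } else {
      head = r.next;
    }
    if (r.next != null) {
      r.next.prev = r.prev;
    } else {
      tail = r.prev;
    }
    r.prev = null;
    r.next = null;
  }
}
{code}
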



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Attachment: HDFS-10690.002.patch

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap employs a Red-Black tree for sorting. This isn't an issue when using 
> traditional HDDs, but when using high-performance SSD/PCIe flash, the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. FIFO ordering is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list: we 
> only need to modify its predecessor's and successor's references in the list.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch was against 2.6.4; I am now porting it to Hadoop trunk, and 
> the patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-7343) A comprehensive and flexible storage policy engine

2016-07-28 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao reassigned HDFS-7343:
-

Assignee: Tim Yao  (was: Kai Zheng)

> A comprehensive and flexible storage policy engine
> --
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Kai Zheng
>Assignee: Tim Yao
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and 
> flexible storage policy engine that considers file attributes, metadata, data 
> temperature, storage type, EC codec, available hardware capabilities, 
> user/application preferences, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398516#comment-15398516
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.7.0_101 Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820805/HDFS-4176-branch-2.0.patch
 |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense 

[jira] [Commented] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398494#comment-15398494
 ] 

Hadoop QA commented on HDFS-10641:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 78m 
57s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820800/HDFS-10641.001.patch |
| JIRA Issue | HDFS-10641 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dc2f651bddf5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1890c3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16242/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16242/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch, HDFS-10641.001.patch
>
>
> See example failing [stack 
> 

[jira] [Commented] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2016-07-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398487#comment-15398487
 ] 

John Zhuge commented on HDFS-9276:
--

Thanks [~xiaochen]!

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, HDFS-9276.11.patch, 
> HDFS-9276.12.patch, HDFS-9276.13.patch, HDFS-9276.14.patch, 
> HDFS-9276.15.patch, HDFS-9276.16.patch, HDFS-9276.17.patch, 
> HDFS-9276.18.patch, HDFS-9276.19.patch, HDFS-9276.20.patch, 
> HDFSReadLoop.scala, debug1.PNG, debug2.PNG
>
>
> The scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens are not updated, which 
> causes the token to expire.
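
As an editorial aside, a minimal sketch of the token layout that point 4 
describes, using the public Token/Credentials API; the nameservice "ns1" and 
NameNode addresses are illustrative assumptions:

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class PrivateTokenSketch {
  public static void main(String[] args) {
    Credentials creds = new Credentials();

    // Logical token, keyed by the HA nameservice.
    Token<?> logical = new Token<>();
    logical.setService(new Text("ha-hdfs:ns1"));
    creds.addToken(logical.getService(), logical);

    // One private copy per physical NameNode address; refreshing the
    // logical token does not refresh these copies, which then expire.
    for (String nn : new String[] {"nn1.example.com:8020", "nn2.example.com:8020"}) {
      Token<?> priv = new Token<>(); // stands in for a copy of the logical token
      priv.setService(new Text(nn));
      creds.addToken(priv.getService(), priv);
    }
  }
}
{code}
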
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stacktrace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Commented] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398473#comment-15398473
 ] 

Hudson commented on HDFS-9276:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10174 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10174/])
HDFS-9276. Failed to Update HDFS Delegation Token for long running (xiao: rev 
d9aae22fdf2ab22ae8ce4a9d32ac71b3dde084d3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java


> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, HDFS-9276.11.patch, 
> HDFS-9276.12.patch, HDFS-9276.13.patch, HDFS-9276.14.patch, 
> HDFS-9276.15.patch, HDFS-9276.16.patch, HDFS-9276.17.patch, 
> HDFS-9276.18.patch, HDFS-9276.19.patch, HDFS-9276.20.patch, 
> HDFSReadLoop.scala, debug1.PNG, debug2.PNG
>
>
> The scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens are not updated, which 
> causes the token to expire.
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stacktrace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   

[jira] [Commented] (HDFS-10672) libhdfs++: reorder directories in src/main/libhdfspp/examples, and add C++ version of cat tool

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398452#comment-15398452
 ] 

Hadoop QA commented on HDFS-10672:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
29s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
51s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820793/HDFS-10672.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-10672 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  |
| uname | Linux 3ab60deba966 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / ba6f2fe |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16243/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 

[jira] [Updated] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2016-07-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9276:

Hadoop Flags: Reviewed

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, HDFS-9276.11.patch, 
> HDFS-9276.12.patch, HDFS-9276.13.patch, HDFS-9276.14.patch, 
> HDFS-9276.15.patch, HDFS-9276.16.patch, HDFS-9276.17.patch, 
> HDFS-9276.18.patch, HDFS-9276.19.patch, HDFS-9276.20.patch, 
> HDFSReadLoop.scala, debug1.PNG, debug2.PNG
>
>
> The scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens are not updated, which 
> causes the token to expire.
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stacktrace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)

[jira] [Updated] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2016-07-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9276:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Failed tests look unrelated, and they passed locally. (TestHDFSCLI was recently 
fixed by HDFS-10696.)

Committed to trunk and branch-2. Thanks [~marsishandsome], [~jzhuge] for 
working on this, and [~hitliuyi], [~ste...@apache.org] and [~vinayrpet] for the 
reviews, and everyone for the comments!

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, HDFS-9276.11.patch, 
> HDFS-9276.12.patch, HDFS-9276.13.patch, HDFS-9276.14.patch, 
> HDFS-9276.15.patch, HDFS-9276.16.patch, HDFS-9276.17.patch, 
> HDFS-9276.18.patch, HDFS-9276.19.patch, HDFS-9276.20.patch, 
> HDFSReadLoop.scala, debug1.PNG, debug2.PNG
>
>
> The scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens are not updated, which 
> causes the token to expire.
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stacktrace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at 

[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398435#comment-15398435
 ] 

Hudson commented on HDFS-10676:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10173/])
HDFS-10676. Add namenode metric to measure time spent in generating (xyao: rev 
ce3d68e9c3512b6f370e7597c411560d8a61052d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java


> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 2.8.0
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398429#comment-15398429
 ] 

Fenghua Hu commented on HDFS-10690:
---

Xiaoyu, thanks for the suggestion. LinkedHashMap is another good choice for 
performance. I considered it before but didn't implement it, so I have no 
performance data for it. LinkedHashMap needs to do two things: 1. hash the key 
and insert into the hash map, and 2. insert into a linked list; hence, in theory, 
it does more work than the LruList in this patch. Yes, LinkedHashMap does make 
the code cleaner, but its performance could be compromised. Please correct me 
if I am wrong.
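
For comparison, a minimal sketch of the LinkedHashMap approach under discussion 
(an editorial illustration, not code from either patch). With accessOrder = true, 
every get/put both hashes the key and re-links the entry in the map's internal 
doubly-linked list, which are the two steps referred to above:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
  private final int capacity;

  LruCache(int capacity) {
    super(16, 0.75f, true); // accessOrder = true keeps entries in LRU order
    this.capacity = capacity;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > capacity; // evict the least-recently-used entry
  }
}
{code}
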

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap employs a Red-Black tree for sorting. This isn't an issue when using 
> traditional HDDs, but when using high-performance SSD/PCIe flash, the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. FIFO ordering is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list: we 
> only need to modify its predecessor's and successor's references in the list.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch was against 2.6.4; I am now porting it to Hadoop trunk, and 
> the patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10689:
--
Release Note: The hdfs dfs -chmod command will reset the sticky bit permission on a 
file/directory when the leading sticky bit digit is omitted in the octal mode (like 
644). So when a file/directory permission is applied using octal mode and the 
sticky bit permission needs to be preserved, it has to be explicitly 
specified in the permission bits (like 1644). This behavior is similar to many 
other filesystems on Linux/BSD.
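
As an editorial illustration of the octal arithmetic behind this note (plain 
Java, not HDFS code): the sticky bit is the 01000 bit above the 0777 permission 
bits, so a three-digit mode clears it:

{code}
public class StickyBitDemo {
  public static void main(String[] args) {
    int withSticky = 01755;  // chmod 1755: rwxr-xr-t (sticky bit set)
    int threeDigit = 0755;   // chmod 755: sticky bit digit omitted, so cleared
    System.out.println((withSticky & 01000) != 0); // true
    System.out.println((threeDigit & 01000) != 0); // false
  }
}
{code}
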

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch, HDFS-10689.004.patch
>
>
> When a directory's permission is modified using the hdfs dfs -chmod command with 
> the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: the sticky bit is removed from the dir, as happens on Mac/Linux native 
> filesystems with native chmod.
> 4. However, removing the sticky bit permission by explicitly turning off the bit 
> works: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398419#comment-15398419
 ] 

Hadoop QA commented on HDFS-10702:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-10702 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820830/HDFS-10702.001.patch |
| JIRA Issue | HDFS-10702 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16246/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: HDFS-10702.001.patch

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: (was: HDFS-10702.001.patch)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: StaleReadfromStandbyNN.pdf

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398411#comment-15398411
 ] 

Xiaoyu Yao commented on HDFS-10676:
---

Thank you [~arpiagariu] for cherry-picking to branch-2.8. 

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 2.8.0
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: HDFS-10702.001.patch

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: (was: StaleReadfromStandbyNN.pdf)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: (was: HDFS-10702.001.patch)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode could be a bottleneck 
> for scalability. One way to solve this problem is to send read-only operations 
> to the Standby NameNode. The disadvantage is that it might be a stale read. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from the 
> Standby, which gives the client the power to set the staleness restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398408#comment-15398408
 ] 

Arpit Agarwal commented on HDFS-10676:
--

Also cherry-picked through to branch-2.8.

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 2.8.0
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).
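
For context, the shape of such a metric is typically a Metrics2 {{MutableRate}} 
wrapped around the KMS call. A minimal, hedged sketch follows; the class, 
field, and method names are illustrative assumptions, not necessarily what the 
committed patch uses.

{code}
import java.util.concurrent.Callable;

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.util.Time;

class EDEKMetricsSketch {
  // Metrics2 publishes a MutableRate as average-time/ops counters.
  @Metric("Time spent generating EDEKs")
  MutableRate generateEDEKTime;

  // Wraps an arbitrary KMS call (e.g. an EDEK generation) with a timer.
  <T> T timeKmsCall(Callable<T> kmsCall) throws Exception {
    long start = Time.monotonicNow();
    try {
      return kmsCall.call();
    } finally {
      generateEDEKTime.add(Time.monotonicNow() - start);
    }
  }
}
{code}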



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10676:
-
Affects Version/s: (was: 3.0.0-alpha1)

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 2.8.0
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10676:
-
Fix Version/s: (was: 3.0.0-alpha1)
   2.8.0

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 2.8.0
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2016-07-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398394#comment-15398394
 ] 

Xiao Chen commented on HDFS-9276:
-

+1 on patch 20, will commit shortly.

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, HDFS-9276.11.patch, 
> HDFS-9276.12.patch, HDFS-9276.13.patch, HDFS-9276.14.patch, 
> HDFS-9276.15.patch, HDFS-9276.16.patch, HDFS-9276.17.patch, 
> HDFS-9276.18.patch, HDFS-9276.19.patch, HDFS-9276.20.patch, 
> HDFSReadLoop.scala, debug1.PNG, debug2.PNG
>
>
> The Scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long running applications. 
> HDFS Client will generate private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens will not be updated, which 
> will cause the token to expire.
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
> val keytab = "/path/to/keytab/xxx.keytab"
> val principal = "x...@abc.com"
> val creds1 = new org.apache.hadoop.security.Credentials()
> val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
> ugi1.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> val fs = FileSystem.get(new Configuration())
> fs.addDelegationTokens("test", creds1)
> null
>   }
> })
> val ugi = UserGroupInformation.createRemoteUser("test")
> ugi.addCredentials(creds1)
> ugi.doAs(new PrivilegedExceptionAction[Void] {
>   // Get a copy of the credentials
>   override def run(): Void = {
> var i = 0
> while (true) {
>   val creds1 = new org.apache.hadoop.security.Credentials()
>   val ugi1 = 
> UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>   ugi1.doAs(new PrivilegedExceptionAction[Void] {
> // Get a copy of the credentials
> override def run(): Void = {
>   val fs = FileSystem.get(new Configuration())
>   fs.addDelegationTokens("test", creds1)
>   null
> }
>   })
>   UserGroupInformation.getCurrentUser.addCredentials(creds1)
>   val fs = FileSystem.get( new Configuration())
>   i += 1
>   println()
>   println(i)
>   println(fs.listFiles(new Path("/user"), false))
>   Thread.sleep(60 * 1000)
> }
> null
>   }
> })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stacktrace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398392#comment-15398392
 ] 

Hanisha Koneru commented on HDFS-10676:
---

Thank you [~xyao] and [~arpitagarwal] for reviewing and committing the patch.

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10676:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
 Tags: metrics
   Status: Resolved  (was: Patch Available)

Thanks [~hkoneru] for the contribution and [~arpiagariu] for the review. I've 
committed the fix to trunk and branch-2. 

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398377#comment-15398377
 ] 

Hudson commented on HDFS-10689:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10172 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10172/])
HDFS-10689.  Hdfs dfs chmod should reset sticky bit permission when the (lei: 
rev a3d0cba81071977ffc22a42f60b2ca07da25ae15)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/PermissionParser.java


> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch, HDFS-10689.004.patch
>
>
> When a directory permission is modified using the hdfs dfs chmod command and 
> when the octal/numeric format is used, the leading sticky bit is not fully 
> honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}
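
To make the expected octal semantics concrete, here is a minimal, hedged 
sketch (illustrative only, not the actual {{PermissionParser}} change): when 
the whole mode string is parsed as base 8, a 3-digit mode simply carries no 
sticky bit, so the bit is reset.

{code}
class OctalModeSketch {
  static short parseOctalMode(String mode) {
    // "1755" parses to 01755 (sticky bit 01000 set); "755" parses to 0755,
    // where the sticky bit is simply absent, i.e. reset.
    return Short.parseShort(mode, 8);
  }

  public static void main(String[] args) {
    System.out.println(parseOctalMode("1755") & 01000); // 512: sticky bit set
    System.out.println(parseOctalMode("755") & 01000);  //   0: sticky bit clear
  }
}
{code}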



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398370#comment-15398370
 ] 

Jing Zhao commented on HDFS-4176:
-

Thanks [~eddyxu]. The branch-2 patch also looks good to me. Maybe in branch-2's 
context, {{getNameNodeProxy}} can be renamed? Other than that, +1.

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176-branch-2.0.patch, HDFS-4176.00.patch, 
> HDFS-4176.01.patch, HDFS-4176.02.patch, HDFS-4176.03.patch, 
> HDFS-4176.04.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Status: Patch Available  (was: Open)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode can become a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that such 
> reads might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Comment: was deleted

(was: This JIRA gives us more up-to-date data on the Standby NN.)

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode can become a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that such 
> reads might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10702:
--
Attachment: StaleReadfromStandbyNN.pdf
HDFS-10702.001.patch

Uploaded the first patch and a design document.

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10702.001.patch, StaleReadfromStandbyNN.pdf
>
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode can become a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that such 
> reads might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2016-07-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398348#comment-15398348
 ] 

Jing Zhao commented on HDFS-10673:
--

So what's your patch's behavior? When doing a subaccess check on "/a/b", your 
patch uses the provider to get attrs for ["/","a","b"], and then falls back to 
the default way ({{getSnapshotINode}}) to get attributes for c and d. And you 
think this is the correct way?

> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[] - which is then 
> not used by the default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], building a string 
> by converting each inode's byte[] name to a string, etc., which will only be 
> used if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer 
> feature.  For #2, paths should be created on-demand for exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398345#comment-15398345
 ] 

Jiayi Zhou commented on HDFS-10702:
---

This JIRA gives us more up-to-date data on the Standby NN.

> Add a Client API and Proxy Provider to enable stale read from Standby
> -
>
> Key: HDFS-10702
> URL: https://issues.apache.org/jira/browse/HDFS-10702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
>
> Currently, clients must always talk to the active NameNode when performing 
> any metadata operation, which means the active NameNode can become a 
> bottleneck for scalability. One way to solve this problem is to send 
> read-only operations to the Standby NameNode. The disadvantage is that such 
> reads might be stale. 
> Here, I'm thinking of adding a Client API to enable/disable stale reads from 
> the Standby, which gives the client the power to set the staleness 
> restriction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10702) Add a Client API and Proxy Provider to enable stale read from Standby

2016-07-28 Thread Jiayi Zhou (JIRA)
Jiayi Zhou created HDFS-10702:
-

 Summary: Add a Client API and Proxy Provider to enable stale read 
from Standby
 Key: HDFS-10702
 URL: https://issues.apache.org/jira/browse/HDFS-10702
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jiayi Zhou
Assignee: Jiayi Zhou
Priority: Minor


Currently, clients must always talk to the active NameNode when performing any 
metadata operation, which means the active NameNode can become a bottleneck 
for scalability. One way to solve this problem is to send read-only operations 
to the Standby NameNode. The disadvantage is that such reads might be stale. 

Here, I'm thinking of adding a Client API to enable/disable stale reads from 
the Standby, which gives the client the power to set the staleness restriction.
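
As a strawman of what such a client API could look like, here is a minimal 
sketch; the interface and method names are illustrative assumptions, not the 
interface in the attached patch.

{code}
// Hedged sketch only: a client-side toggle plus a staleness bound.
public interface StaleReadOptions {
  /** Allow read-only operations to be served by the Standby NameNode. */
  void setStaleReadEnabled(boolean enabled);

  /** Upper bound on acceptable staleness, in milliseconds. */
  void setStalenessLimitMs(long limitMs);
}
{code}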



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2016-07-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398318#comment-15398318
 ] 

Daryn Sharp commented on HDFS-10673:


Ok, thanks.  Do you want it to work correctly or be as broken as it is today?  
Currently the provider is called with the path elements for a directory, and 
that directory's inode attrs.  Then the provider is called with the same 
original path elements for every descendant of said directory.  E.g., do a 
subaccess check on "/a/b": the provider is called with ["","a","b"] and b's 
inode attrs, and then called with ["","a","b"] again even when passing the 
inode attrs for "/a/b/c/d".

> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[] - which is then 
> not used by the default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], building a string 
> by converting each inode's byte[] name to a string, etc., which will only be 
> used if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer 
> feature.  For #2, paths should be created on-demand for exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398284#comment-15398284
 ] 

Lei (Eddy) Xu commented on HDFS-6962:
-

The {{-09}} patch LGTM.  +1. 

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is ok!
> default:group:readwrite:rwx allows the readwrite group rwx access through 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access through inheritance.
> default:mask::rwx: the inheritance mask is rwx, so effectively no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.
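
The effective-permission rule visible in the listings above is a simple AND 
with the mask entry; a minimal, hedged sketch (illustrative, not HDFS code) 
follows.

{code}
class AclMaskSketch {
  // POSIX ACL rule: a named entry's effective permission is entry AND mask.
  static int effective(int entryPerm, int maskPerm) {
    return entryPerm & maskPerm;
  }

  public static void main(String[] args) {
    System.out.println(effective(07, 05)); // 5: rwx under an r-x mask -> r-x
    System.out.println(effective(07, 06)); // 6: rwx under an rw- mask -> rw-
  }
}
{code}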



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10684) WebHDFS DataNode calls fail when boolean parameters not provided

2016-07-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10684:
--
Summary: WebHDFS DataNode calls fail when boolean parameters not provided  
(was: WebHDFS calls fail when boolean parameters not provided)

> WebHDFS DataNode calls fail when boolean parameters not provided
> 
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false"
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false"
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.
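
One plausible fix is to default omitted boolean parameters rather than hand 
the literal string "null" to the Boolean parser. A minimal, hedged sketch of 
that defaulting behavior follows; it is illustrative, not the actual WebHDFS 
parameter-handling code.

{code}
class BooleanParamSketch {
  static boolean parse(String raw, boolean defaultValue) {
    if (raw == null || raw.isEmpty()) {
      return defaultValue; // parameter was absent from the URL
    }
    if (!raw.equals("true") && !raw.equals("false")) {
      throw new IllegalArgumentException(
          "Failed to parse \"" + raw + "\" to Boolean.");
    }
    return Boolean.parseBoolean(raw);
  }
}
{code}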



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398274#comment-15398274
 ] 

Arpit Agarwal commented on HDFS-10676:
--

+1 from me too. Thank you for the review [~xyao] and thanks for the 
contribution [~hanishakoneru].

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398271#comment-15398271
 ] 

Xiaoyu Yao commented on HDFS-10676:
---

+1 for the v006 patch. I will commit it by EOD today, in case [~arpitagarwal] 
or others have additional feedback.

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-28 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398265#comment-15398265
 ] 

Mingliang Liu commented on HDFS-10641:
--

Thanks for the patch, [~daryn].

The {{blockingOp}} is a future task whose internal state is updated sometime 
after {{runBlockOp}} returns. The queue length is internal state of the block 
manager. Once {{runBlockOp}} returns, the {{blockingOp}} state is updated 
independently of the block manager. This is indeed not a problem with the wait 
condition. The v0 patch simply waited longer, since the {{blockingOp}} running 
state would eventually be updated.

I think the v1 patch is good as well, since the assertion depends on a 
dedicated latch instead of another thread's internal state.

> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch, HDFS-10641.001.patch
>
>
> See example failing [stack 
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}
> Recent build test failure 
> [1|https://builds.apache.org/job/PreCommit-HADOOP-Build/10100/testReport/] 
> and [2|https://builds.apache.org/job/PreCommit-HDFS-Build/16229/testReport/] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-4176:

Attachment: HDFS-4176-branch-2.0.patch

Hi, [~jingzhao] 

I made a patch for {{branch-2}}. The conflicts between trunk and branch-2 are 
due to {{MultipleNameNodeProxy}} not existing on branch-2, so I put 
{{getActiveNodeProxy.rollEditLog()}} into a simple {{Callable#call()}}.

Would you mind taking another look? Thanks.
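
For reference, a minimal sketch of that pattern: run the RPC in a task (here a 
{{Runnable}} for brevity) and bound it with {{Future#get(timeout)}}. The class 
and method names below are illustrative, not the patch's.

{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class RollEditsTimeoutSketch {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  void rollEditsWithTimeout(Runnable rollEditLogRpc, long timeoutMs)
      throws InterruptedException, ExecutionException, TimeoutException {
    Future<?> future = executor.submit(rollEditLogRpc);
    // If the active NN is frozen, this throws TimeoutException instead of
    // hanging the EditLogTailer thread forever.
    future.get(timeoutMs, TimeUnit.MILLISECONDS);
  }
}
{code}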

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176-branch-2.0.patch, HDFS-4176.00.patch, 
> HDFS-4176.01.patch, HDFS-4176.02.patch, HDFS-4176.03.patch, 
> HDFS-4176.04.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-28 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10641:
---
Attachment: HDFS-10641.001.patch

The queue length should always be zero at that point, so it shouldn't be part 
of the wait condition.  Checking whether the future is done really just ensured 
that runBlockOp returned.  I think it's cleaner to just use another latch.
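
A minimal sketch of that latch-based approach (names illustrative, not the 
test's): the queued op counts down a dedicated latch, and the test performs a 
bounded wait on it rather than polling another thread's internal state.

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class LatchSketch {
  final CountDownLatch opStarted = new CountDownLatch(1);

  Runnable blockingOp() {
    return () -> {
      opStarted.countDown(); // signal the test that the op is now running
      // ... the actual block op would run here ...
    };
  }

  boolean awaitOpStarted() throws InterruptedException {
    return opStarted.await(10, TimeUnit.SECONDS); // bounded, deterministic wait
  }
}
{code}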

> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch, HDFS-10641.001.patch
>
>
> See example failing [stack 
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}
> Recent build test failure 
> [1|https://builds.apache.org/job/PreCommit-HADOOP-Build/10100/testReport/] 
> and [2|https://builds.apache.org/job/PreCommit-HDFS-Build/16229/testReport/] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10672) libhdfs++: reorder directories in src/main/libhdfspp/examples, and add C++ version of cat tool

2016-07-28 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398222#comment-15398222
 ] 

Anatoli Shein commented on HDFS-10672:
--

Thank you, [~bobhansen], for your review. I am attaching the new patch with the 
following changes:

* Let's make both of the executables called "hdfs_cat". Anybody that wants to 
copy them into the same directory can rename them themselves.
(/) Done. Also edited the CMakeLists file for the C version of cat to keep it 
consistent.

* For the cpp version let's use the C++ APIs (defined in hdfspp/hdfspp.h) for 
the config, filesystem, and files, which will let us skip the Builder interface.
(/) Done. The Builder interface is now gone.

> libhdfs++: reorder directories in src/main/libhdfspp/examples, and add C++ 
> version of cat tool
> --
>
> Key: HDFS-10672
> URL: https://issues.apache.org/jira/browse/HDFS-10672
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10672.HDFS-8707.000.patch, 
> HDFS-10672.HDFS-8707.001.patch
>
>
> src/main/libhdfspp/examples should be structured like 
> examples/language/utility instead of examples/utility/language for easier 
> access by different developers.
> Additionally implementing C++ version of cat tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10672) libhdfs++: reorder directories in src/main/libhdfspp/examples, and add C++ version of cat tool

2016-07-28 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10672:
-
Attachment: HDFS-10672.HDFS-8707.001.patch

Please review.

> libhdfs++: reorder directories in src/main/libhdfspp/examples, and add C++ 
> version of cat tool
> --
>
> Key: HDFS-10672
> URL: https://issues.apache.org/jira/browse/HDFS-10672
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10672.HDFS-8707.000.patch, 
> HDFS-10672.HDFS-8707.001.patch
>
>
> src/main/libhdfspp/examples should be structured like 
> examples/language/utility instead of examples/utility/language for easier 
> access by different developers.
> Additionally implementing C++ version of cat tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398195#comment-15398195
 ] 

Hudson commented on HDFS-4176:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10171/])
HDFS-4176. EditLogTailer should call rollEdits with a timeout. (Lei Xu) (lei: 
rev 67406460f0b6c05edde1d1185aeb42b6324df202)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, HDFS-4176.04.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-28 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398193#comment-15398193
 ] 

Mingliang Liu commented on HDFS-10641:
--

Obviously {{hadoop.cli.TestHDFSCLI}} is not related.

> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch
>
>
> See example failing [stack 
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}
> Recent build test failure 
> [1|https://builds.apache.org/job/PreCommit-HADOOP-Build/10100/testReport/] 
> and [2|https://builds.apache.org/job/PreCommit-HDFS-Build/16229/testReport/] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398191#comment-15398191
 ] 

Daryn Sharp commented on HDFS-10641:


Taking a look now.

> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch
>
>
> See example failing [stack 
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}
> Recent build test failure 
> [1|https://builds.apache.org/job/PreCommit-HADOOP-Build/10100/testReport/] 
> and [2|https://builds.apache.org/job/PreCommit-HDFS-Build/16229/testReport/] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10689:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch, HDFS-10689.004.patch
>
>
> When a directory permission is modified using the hdfs dfs chmod command and 
> when the octal/numeric format is used, the leading sticky bit is not fully 
> honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398175#comment-15398175
 ] 

Manoj Govindassamy commented on HDFS-10689:
---

TestLargeBlockReport, TestDataNodeHotSwapVolumes and TestHttpServerLifecycle 
test failures are not related to the patch.

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch, HDFS-10689.004.patch
>
>
> When a directory permission is modified using the hdfs dfs chmod command and 
> when the octal/numeric format is used, the leading sticky bit is not fully 
> honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-28 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10641:
-
Description: 
See example failing [stack 
trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].

h6. Stacktrace
{code}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
{code}

Recent build test failure 
[1|https://builds.apache.org/job/PreCommit-HADOOP-Build/10100/testReport/] and 
[2|https://builds.apache.org/job/PreCommit-HDFS-Build/16229/testReport/] 

  was:
See example failing [stack 
trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].

h6. Stacktrace
{code}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
{code}


> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch
>
>
> See example failing [stack 
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}
> Recent build test failure 
> [1|https://builds.apache.org/job/PreCommit-HADOOP-Build/10100/testReport/] 
> and [2|https://builds.apache.org/job/PreCommit-HDFS-Build/16229/testReport/] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398160#comment-15398160
 ] 

Hadoop QA commented on HDFS-10689:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} root: The patch generated 0 new + 219 unchanged - 2 
fixed = 219 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820753/HDFS-10689.004.patch |
| JIRA Issue | HDFS-10689 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a7bfbddc65f7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26de4f0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16241/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16241/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Commented] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398158#comment-15398158
 ] 

Hudson commented on HDFS-10650:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10170/])
HDFS-10650. DFSClient#mkdirs and DFSClient#primitiveMkdir should use (xiao: rev 
328c855a578b90362d33dc822075f9b828df6f1e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -
>
> Key: HDFS-10650
> URL: https://issues.apache.org/jira/browse/HDFS-10650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10650.001.patch, HDFS-10650.002.patch
>
>
> These two DFSClient methods should use the default directory permission when 
> creating a directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>   boolean createParent) throws IOException {
> if (permission == null) {
>   permission = FsPermission.getDefault();
> }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission, 
> boolean createParent)
> throws IOException {
> checkOpen();
> if (absPermission == null) {
>   absPermission = 
> FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
> } 
> {code}
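Read together with the issue title, the shape of the fix is to fall back to a
directory-specific default when the caller passes null. A minimal sketch,
assuming the stock FsPermission API (the committed change itself is in the
patches and commit above; the helper name is illustrative, not DFSClient's
actual code):
{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

class MkdirsPermissionSketch {
  // When no explicit permission is supplied, fall back to the
  // directory default rather than the generic default.
  static FsPermission resolveDirPermission(FsPermission permission) {
    return permission == null ? FsPermission.getDirDefault() : permission;
  }
}
{code}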



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398150#comment-15398150
 ] 

Lei (Eddy) Xu commented on HDFS-10689:
--

+1 pending Jenkins. 

Thanks a lot for working on this, [~manojg]

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch, HDFS-10689.004.patch
>
>
> When a directory's permission is modified using the hdfs dfs chmod command 
> with the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply the sticky bit on the dir: hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove the sticky bit on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: the sticky bit on the dir is removed, as happens with native chmod 
> on Mac/Linux native filesystems.
> 4. However, removing the sticky bit by explicitly turning the bit off does 
> work: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}
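The expected semantics here: a three-digit octal mode is read as a full 12-bit
value with the high bits zero, so an omitted sticky bit is reset rather than
preserved. A minimal sketch of that behavior (generic Java, not the actual
ChmodParser code):
{code:java}
class OctalChmodSketch {
  // Parse the whole octal string, so "755" and "0755" both yield
  // setuid/setgid/sticky bits of 0.
  static short parse(String octalMode) {
    return (short) (Integer.parseInt(octalMode, 8) & 07777);
  }

  public static void main(String[] args) {
    System.out.println(Integer.toOctalString(parse("1755"))); // 1755
    System.out.println(Integer.toOctalString(parse("755")));  // 755, sticky bit 0
  }
}
{code}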



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-07-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398140#comment-15398140
 ] 

John Zhuge commented on HDFS-10650:
---

Thanks [~xiao] for the review and commit, and thanks [~cnauroth] for the review.

> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -
>
> Key: HDFS-10650
> URL: https://issues.apache.org/jira/browse/HDFS-10650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10650.001.patch, HDFS-10650.002.patch
>
>
> These two DFSClient methods should use the default directory permission when 
> creating a directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>   boolean createParent) throws IOException {
> if (permission == null) {
>   permission = FsPermission.getDefault();
> }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission, 
> boolean createParent)
> throws IOException {
> checkOpen();
> if (absPermission == null) {
>   absPermission = 
> FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
> } 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-07-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10650:
-
   Resolution: Fixed
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks [~jzhuge] for working on this, and [~cnauroth] for 
review!

> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -
>
> Key: HDFS-10650
> URL: https://issues.apache.org/jira/browse/HDFS-10650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10650.001.patch, HDFS-10650.002.patch
>
>
> These two DFSClient methods should use the default directory permission when 
> creating a directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>   boolean createParent) throws IOException {
> if (permission == null) {
>   permission = FsPermission.getDefault();
> }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission, 
> boolean createParent)
> throws IOException {
> checkOpen();
> if (absPermission == null) {
>   absPermission = 
> FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
> } 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10650) DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory permission

2016-07-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398131#comment-15398131
 ] 

Xiao Chen commented on HDFS-10650:
--

+1 on patch 2, will commit shortly.

> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -
>
> Key: HDFS-10650
> URL: https://issues.apache.org/jira/browse/HDFS-10650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-10650.001.patch, HDFS-10650.002.patch
>
>
> These two DFSClient methods should use the default directory permission when 
> creating a directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>   boolean createParent) throws IOException {
> if (permission == null) {
>   permission = FsPermission.getDefault();
> }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission, 
> boolean createParent)
> throws IOException {
> checkOpen();
> if (absPermission == null) {
>   absPermission = 
> FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
> } 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398095#comment-15398095
 ] 

Jing Zhao commented on HDFS-4176:
-

+1 on the rebased patch.

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, HDFS-4176.04.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.
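A minimal sketch of the bounded-call technique the summary asks for, assuming
the blocking RPC is pushed onto a single-threaded executor (method and field
names are illustrative, not the patch's; rollEditsRpc stands in for the real
NamenodeProtocol call):
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class RollEditsTimeoutSketch {
  private final ExecutorService rpcExecutor =
      Executors.newSingleThreadExecutor();

  // Run the blocking rollEdits RPC on a separate thread and bound the
  // wait, so a frozen (but not crashed) active NN cannot hang the
  // tailer thread forever.
  void rollEditsBounded(Callable<Void> rollEditsRpc, long timeoutMs)
      throws Exception {
    Future<Void> f = rpcExecutor.submit(rollEditsRpc);
    try {
      f.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      f.cancel(true); // interrupt the stuck call and give up this round
      throw e;
    }
  }
}
{code}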



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org




[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398076#comment-15398076
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 424 unchanged - 2 fixed = 424 total (was 426) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820739/HDFS-4176.04.patch |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 623876c7d94c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26de4f0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16240/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16240/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16240/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer should call rollEdits with a timeout
> 

[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398071#comment-15398071
 ] 

Hadoop QA commented on HDFS-10676:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m 
54s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820742/HDFS-10676.006.patch |
| JIRA Issue | HDFS-10676 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0c309b952bf2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26de4f0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16239/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16239/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> 

[jira] [Updated] (HDFS-8897) Balancer should handle fs.defaultFS trailing slash in HA

2016-07-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-8897:
-
Attachment: HDFS-8897.003.patch

Patch 003:
* Fix checkstyle
* Only sanitize defaultUri for the hdfs scheme (see the sketch below)
* Move new test code into a separate method because the original method is too 
long
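A minimal sketch of the sanitizing idea (names are illustrative; the real
change lives in the attached patch). Stripping the trailing slash makes
"hdfs://sandbox/" and "hdfs://sandbox" resolve to the same namenode instead of
being treated as two distinct namenodes by the Balancer:
{code:java}
import java.net.URI;

class DefaultFsSanitizeSketch {
  // For the hdfs scheme, drop a bare trailing slash from fs.defaultFS.
  static URI sanitize(URI defaultUri) {
    if ("hdfs".equalsIgnoreCase(defaultUri.getScheme())
        && "/".equals(defaultUri.getPath())) {
      return URI.create(defaultUri.getScheme() + "://"
          + defaultUri.getAuthority());
    }
    return defaultUri;
  }

  public static void main(String[] args) {
    System.out.println(sanitize(URI.create("hdfs://sandbox/"))); // hdfs://sandbox
    System.out.println(sanitize(URI.create("hdfs://sandbox")));  // hdfs://sandbox
  }
}
{code}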

> Balancer should handle fs.defaultFS trailing slash in HA
> 
>
> Key: HDFS-8897
> URL: https://issues.apache.org/jira/browse/HDFS-8897
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.7.1
> Environment: Centos 6.6
>Reporter: LINTE
>Assignee: John Zhuge
> Attachments: HDFS-8897.001.patch, HDFS-8897.002.patch, 
> HDFS-8897.003.patch
>
>
> When the balancer is launched, it should test whether a /system/balancer.id 
> file already exists in HDFS.
> Even when the file doesn't exist, the balancer refuses to run:
> 15/08/14 16:35:12 INFO balancer.Balancer: namenodes  = [hdfs://sandbox/, 
> hdfs://sandbox]
> 15/08/14 16:35:12 INFO balancer.Balancer: parameters = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, max idle iteration 
> = 5, number of nodes to be excluded = 0, number of nodes to be included = 0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> java.io.IOException: Another Balancer is running..  Exiting ...
> Aug 14, 2015 4:35:14 PM  Balancing took 2.408 seconds
> Looking at the audit log file when trying to run the balancer, the balancer 
> create the /system/balancer.id and then delete it on exiting ... 
> 2015-08-14 16:37:45,844 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:45,900 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=create  
> src=/system/balancer.id dst=nullperm=hdfs:hadoop:rw-r-  
> proto=rpc
> 2015-08-14 16:37:45,919 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,090 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,112 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,117 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=delete  
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> The error seems to be located in 
> org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java.
> The function checkAndMarkRunning returns null even if /system/balancer.id 
> doesn't exist before entering this function; if it does exist, it is deleted 
> and the balancer exits with the same error.
> 
>   private OutputStream checkAndMarkRunning() throws IOException {
> try {
>   if (fs.exists(idPath)) {
> // try appending to it so that it will fail fast if another balancer 
> is
> // running.
> IOUtils.closeStream(fs.append(idPath));
> fs.delete(idPath, true);
>   }
>   final FSDataOutputStream fsout = fs.create(idPath, false);
>   // mark balancer idPath to be deleted during filesystem closure
>   fs.deleteOnExit(idPath);
>   if (write2IdFile) {
> fsout.writeBytes(InetAddress.getLocalHost().getHostName());
> fsout.hflush();
>   }
>   return fsout;
> } catch(RemoteException e) {
>   
> if(AlreadyBeingCreatedException.class.getName().equals(e.getClassName())){
> return null;
>   } else {
> throw e;
>   }
> }
>   }
> 
> Regards



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10689:
--
Attachment: HDFS-10689.004.patch

Attaching v004 patch: avoids wildcard ("*") imports and retains the original 
import order.

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch, HDFS-10689.004.patch
>
>
> When a directory's permission is modified using the hdfs dfs chmod command 
> with the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply the sticky bit on the dir: hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove the sticky bit on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: the sticky bit on the dir is removed, as happens with native chmod 
> on Mac/Linux native filesystems.
> 4. However, removing the sticky bit by explicitly turning the bit off does 
> work: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9271) Implement basic NN operations

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397905#comment-15397905
 ] 

Hadoop QA commented on HDFS-9271:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
5s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
25s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820731/HDFS-9271.HDFS-8707.006.patch
 |
| JIRA Issue | HDFS-9271 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9a81e0d6140a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / ba6f2fe |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16238/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16238/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> 

[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Status: Patch Available  (was: In Progress)

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).
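A minimal sketch of the usual metrics2 pattern for such a timing metric,
assuming MutableRate; the metric and method names are illustrative, not what
the patch registers, and registration with DefaultMetricsSystem is elided:
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.util.Time;

@Metrics(about = "EDEK timing sketch", context = "dfs")
class EdekMetricSketch {
  @Metric("Time spent generating an EDEK via the KMS")
  MutableRate generateEdekTime;

  // Wrap the KMS interaction and record the elapsed time; kmsCall
  // stands in for the real generateEncryptedKey() call.
  void generateEdek(Runnable kmsCall) {
    long start = Time.monotonicNow();
    try {
      kmsCall.run();
    } finally {
      generateEdekTime.add(Time.monotonicNow() - start);
    }
  }
}
{code}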



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Attachment: HDFS-10676.006.patch

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Status: In Progress  (was: Patch Available)

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch, HDFS-10676.006.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-4176:

Attachment: HDFS-4176.04.patch

Sure, [~jingzhao] 

Updated the patch to fix the conflicts from HDFS-10519.

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, HDFS-4176.04.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9941) Do not log StandbyException on NN, other minor logging fixes

2016-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9941:

Issue Type: Improvement  (was: Bug)

> Do not log StandbyException on NN, other minor logging fixes
> 
>
> Key: HDFS-9941
> URL: https://issues.apache.org/jira/browse/HDFS-9941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-9941-branch-2.03.patch, HDFS-9941.01.patch, 
> HDFS-9941.02.patch, HDFS-9941.03.patch
>
>
> The NameNode can skip logging StandbyException messages. These are seen 
> regularly in normal operation and convey no useful information.
> We no longer log the locations of newly allocated blocks in 2.8.0. The DN IDs 
> can be useful for debugging so let's add that back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397871#comment-15397871
 ] 

Xiaoyu Yao commented on HDFS-10690:
---

[~fenghua_hu], thanks for reporting the issue and posting the fix. If the goal 
is to avoid the TreeMap overhead, can we use LinkedHashMap with the following 
constructor ({{accessOrder=true}}) to simplify the code changes? 

{code}
public LinkedHashMap(int initialCapacity,
 float loadFactor,
 boolean accessOrder)
Constructs an empty LinkedHashMap instance with the specified initial capacity, 
load factor and ordering mode.
Parameters:
initialCapacity - the initial capacity
loadFactor - the load factor
accessOrder - the ordering mode - true for access-order, false for 
insertion-order

{code}
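For illustration, a minimal sketch of how that constructor behaves as an LRU
structure. Capacity-based eviction via removeEldestEntry is a simplification
here; ShortCircuitCache evicts on timeouts, so the real change would walk the
access order instead:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// An access-ordered LinkedHashMap gives O(1) insert/remove/lookup and
// iterates from least- to most-recently-used, the order eviction wants.
class LruSketch<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  LruSketch(int maxEntries) {
    super(16, 0.75f, true); // accessOrder=true => LRU iteration order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries; // evict the least-recently-used entry
  }
}
{code}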

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs a Red-Black tree for sorting. This isn't an issue when using 
> a traditional HDD, but when using high-performance SSD/PCIe Flash, the cost 
> of inserting/removing an entry becomes considerable.
> To mitigate it, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. The FIFO is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this issue, we introduce two references into the ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list; we 
> only need to modify its predecessor's and successor's references in the list.
> Our tests showed a 15-50% performance improvement when using PCIe flash as 
> the storage medium.
> The original patch was against 2.6.4; I am now porting it to Hadoop trunk, 
> and the patch will be posted soon.
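A minimal sketch of the O(1) unlink the description relies on (class and field
names are illustrative, not the patch's): with prev/next references embedded
in each replica, removal needs no lookup, only the neighbors' links change.
{code:java}
class ReplicaNode {
  ReplicaNode prev, next;
}

class FifoListSketch {
  private ReplicaNode head, tail;

  void append(ReplicaNode n) {          // O(1): FIFO insert at the tail
    n.prev = tail;
    n.next = null;
    if (tail != null) tail.next = n; else head = n;
    tail = n;
  }

  void unlink(ReplicaNode n) {          // O(1): no traversal required
    if (n.prev != null) n.prev.next = n.next; else head = n.next;
    if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
    n.prev = n.next = null;
  }
}
{code}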



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397868#comment-15397868
 ] 

Jing Zhao commented on HDFS-4176:
-

[~eddyxu], looks like the patch needs a rebase due to the recent commit of 
HDFS-10519. Could you please update the patch accordingly? Thanks!

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397852#comment-15397852
 ] 

Lei (Eddy) Xu commented on HDFS-4176:
-

Thanks for the review and commit, [~jingzhao]!

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10681) DiskBalancer: query command should report Plan file path apart from PlanID

2016-07-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397846#comment-15397846
 ] 

Manoj Govindassamy commented on HDFS-10681:
---

Unit test failures are unrelated to the patch. 

> DiskBalancer: query command should report Plan file path apart from PlanID
> --
>
> Key: HDFS-10681
> URL: https://issues.apache.org/jira/browse/HDFS-10681
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10681.001.patch, HDFS-10681.002.patch
>
>
> The DiskBalancer query command currently reports only the planID (SHA512 
> hex). An ongoing disk balancing activity in a datanode can be cancelled 
> either by planID + datanode_address or just by pointing to the right plan 
> file. Since there could be many plan files, to avoid ambiguity it is better 
> if the query command also reports the plan file path.
> {noformat}
> $ hdfs diskbalancer --help query 
> usage: hdfs diskbalancer -query   [options]
> Query Plan queries a given data node about the current state of disk
> balancer execution.
> --query    Queries the disk balancer status of a given datanode.
> Query command retrieves *the plan ID* and the current running state.
> {noformat}
> Sample query command output:
> {noformat}
> 16/06/20 15:42:16 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_UNDER_PROGRESS
> or
> 16/06/20 15:46:09 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_DONE
> {noformat}
> Cancel command syntax:
> {noformat}
> $ hdfs diskbalancer --help cancel
> *usage: hdfs diskbalancer -cancel  | -cancel  -node
> *
> Cancel command cancels a running disk balancer operation.
> --cancel   Cancels a running plan using a plan file.
> --node  Cancels a running plan using a plan ID and hostName
> Cancel command can be run via pointing to a plan file, or by reading the
> plan ID using the query command and then using planID and hostname.
> Examples of how to run this command are
> hdfs diskbalancer -cancel 
> hdfs diskbalancer -cancel  -node 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-28 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397819#comment-15397819
 ] 

Yongjun Zhang commented on HDFS-10625:
--

Thanks [~vinayrpet] for the review and [~linyiqun] for the new rev.

The patch looks good to me. Does it look good to you, Vinay? The unit test 
failures look unrelated; I know the org.apache.hadoop.cli.TestHDFSCLI.testAll 
failure is fixed by HDFS-10696. Thanks.



>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.003.patch, 
> HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, when the 
> block is corrupt, where the first corrupted chunk in the block is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397805#comment-15397805
 ] 

Hadoop QA commented on HDFS-10690:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project: The patch generated 17 new 
+ 197 unchanged - 2 fixed = 214 total (was 199) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Suspicious comparison of Long references in 
org.apache.hadoop.hdfs.shortcircuit.LruList.containsKey(Long)  At 
LruList.java:in org.apache.hadoop.hdfs.shortcircuit.LruList.containsKey(Long)  
At LruList.java:[line 153] |
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820709/HDFS-10690.001.patch |
| JIRA Issue | HDFS-10690 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a209889e560a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7f3c306 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397798#comment-15397798
 ] 

Hadoop QA commented on HDFS-10690:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project: The patch generated 17 new 
+ 196 unchanged - 2 fixed = 213 total (was 198) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Suspicious comparison of Long references in 
org.apache.hadoop.hdfs.shortcircuit.LruList.containsKey(Long)  At 
LruList.java:in org.apache.hadoop.hdfs.shortcircuit.LruList.containsKey(Long)  
At LruList.java:[line 153] |
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820708/lrulist.patch |
| JIRA Issue | HDFS-10690 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f13ea27f1665 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7f3c306 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| 
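
The FindBugs -1 above ("Suspicious comparison of Long references" in
LruList.containsKey(Long)) flags use of == on boxed Long values, which compares
object identity rather than numeric value. A minimal standalone demonstration of
that bug class (illustrative only; not code from the patch):

{code}
public class LongCompareDemo {
  public static void main(String[] args) {
    Long a = 127L, b = 127L;  // inside the Long autobox cache (-128..127)
    Long c = 128L, d = 128L;  // outside the cache: distinct objects
    System.out.println(a == b);      // true: cached instances are shared
    System.out.println(c == d);      // false: reference comparison breaks
    System.out.println(c.equals(d)); // true: value comparison is the fix
  }
}
{code}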

[jira] [Comment Edited] (HDFS-8897) Balancer should handle fs.defaultFS trailing slash in HA

2016-07-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397791#comment-15397791
 ] 

John Zhuge edited comment on HDFS-8897 at 7/28/16 4:43 PM:
---

Fixing checkstyle and unit test failures.


was (Author: jzhuge):
Fixing checkstyle and TestBalancerWithMultipleNameNodes failure.

> Balancer should handle fs.defaultFS trailing slash in HA
> 
>
> Key: HDFS-8897
> URL: https://issues.apache.org/jira/browse/HDFS-8897
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.7.1
> Environment: Centos 6.6
>Reporter: LINTE
>Assignee: John Zhuge
> Attachments: HDFS-8897.001.patch, HDFS-8897.002.patch
>
>
> When the balancer is launched, it should test whether a 
> /system/balancer.id file already exists in HDFS.
> When the file doesn't exist, the balancer still refuses to run:
> 15/08/14 16:35:12 INFO balancer.Balancer: namenodes  = [hdfs://sandbox/, 
> hdfs://sandbox]
> 15/08/14 16:35:12 INFO balancer.Balancer: parameters = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, max idle iteration 
> = 5, number of nodes to be excluded = 0, number of nodes to be included = 0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> java.io.IOException: Another Balancer is running..  Exiting ...
> Aug 14, 2015 4:35:14 PM  Balancing took 2.408 seconds
> Looking at the audit log file when trying to run the balancer: the balancer 
> creates /system/balancer.id and then deletes it on exiting ... 
> 2015-08-14 16:37:45,844 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:45,900 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=create  
> src=/system/balancer.id dst=null perm=hdfs:hadoop:rw-r-  
> proto=rpc
> 2015-08-14 16:37:45,919 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:46,090 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:46,112 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:46,117 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=delete  
> src=/system/balancer.id dst=null perm=null   proto=rpc
> The error seems to be located in 
> org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java 
> The function checkAndMarkRunning returns null even if /system/balancer.id 
> doesn't exist before entering this function; if it exists, then it is deleted 
> and the balancer exits with the same error.
> 
>   private OutputStream checkAndMarkRunning() throws IOException {
>     try {
>       if (fs.exists(idPath)) {
>         // Try appending to it so that it will fail fast if another
>         // balancer is running.
>         IOUtils.closeStream(fs.append(idPath));
>         fs.delete(idPath, true);
>       }
>       final FSDataOutputStream fsout = fs.create(idPath, false);
>       // Mark balancer idPath to be deleted during filesystem closure.
>       fs.deleteOnExit(idPath);
>       if (write2IdFile) {
>         fsout.writeBytes(InetAddress.getLocalHost().getHostName());
>         fsout.hflush();
>       }
>       return fsout;
>     } catch (RemoteException e) {
>       if (AlreadyBeingCreatedException.class.getName().equals(
>           e.getClassName())) {
>         return null;
>       } else {
>         throw e;
>       }
>     }
>   }
> 
> Regards



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
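
An aside on the issue quoted above: the namenodes list shows the same service 
twice ({{hdfs://sandbox/}} and {{hdfs://sandbox}}), which is what lets a second 
balancer instance collide with the first over /system/balancer.id. A 
hypothetical sketch of collapsing the two spellings to one key (illustrative 
only; not necessarily how the patch resolves it):

{code}
import java.net.URI;

public class NormalizeDefaultFs {
  // Hypothetical helper: strip a bare trailing slash so that
  // hdfs://sandbox/ and hdfs://sandbox map to the same namenode key.
  static URI normalize(URI u) {
    if ("/".equals(u.getPath())) {
      return URI.create(u.getScheme() + "://" + u.getAuthority());
    }
    return u;
  }

  public static void main(String[] args) {
    System.out.println(normalize(URI.create("hdfs://sandbox/"))); // hdfs://sandbox
    System.out.println(normalize(URI.create("hdfs://sandbox")));  // hdfs://sandbox
  }
}
{code}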

[jira] [Updated] (HDFS-9271) Implement basic NN operations

2016-07-28 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-9271:

Attachment: HDFS-9271.HDFS-8707.006.patch

Please review.

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch, HDFS-9271.HDFS-8707.002.patch, 
> HDFS-9271.HDFS-8707.003.patch, HDFS-9271.HDFS-8707.004.patch, 
> HDFS-9271.HDFS-8707.005.patch, HDFS-9271.HDFS-8707.006.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9271) Implement basic NN operations

2016-07-28 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397792#comment-15397792
 ] 

Anatoli Shein commented on HDFS-9271:
-

Thank you, [~bobhansen] for your review.

In the new patch I have addressed your comments in the following way:

test_libhdfs_threaded.c: why did we move doTestGetDefaultBlockSize?
(/) Apparently I had a bigger problem in the DefaultBlockSize functions: they 
were not picking up the default value from the config file. This is fixed in 
the new patch. I also added another quick test in test_libhdfs_threaded to 
cover an existing file.
hdfs.cc: hdfsFileIsOpenForWrite and hdfsUnbufferFile should simply return 
false without an error code. We've implemented the functions, we just haven't 
implemented writing or buffering. 
(/) Done.
hdfs.cc: hdfsAvailable should set errno to 0.
(/) Done.
filesystem.h: Why does mkdirs have uint64_t permissions and SetPermissions 
int16_t? Should they both be uint16_t since negative values are invalid?
(/) I have updated all of them to use uint16_t now.
Not part of your refactoring, but looks like a problem: 
hdfs_shim.c: Shouldn't hdfsSeek/hdfsTell do a seek/tell on both the libhdfs 
and libhdfspp file handles?
(/) Yes, I fixed them to do seek/tell on both the libhdfs and libhdfspp file 
handles.
Also not part of your refactoring, but necessary to get to "no libhdfs in 
non-write paths":
hdfs_shim.c: hdfsConfXXX should go through the libhdfspp interface.
(/) Done.
Minor nits:
filesystem.cc: Why can't we set the high bits in GetBlockLocations? Include 
some comments with your experience/reasoning.
(/) Added comments to both filesystem.cc and namenode_operations.cc.

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch, HDFS-9271.HDFS-8707.002.patch, 
> HDFS-9271.HDFS-8707.003.patch, HDFS-9271.HDFS-8707.004.patch, 
> HDFS-9271.HDFS-8707.005.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8897) Balancer should handle fs.defaultFS trailing slash in HA

2016-07-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-8897:
-
Status: In Progress  (was: Patch Available)

Fixing checkstyle and TestBalancerWithMultipleNameNodes failure.

> Balancer should handle fs.defaultFS trailing slash in HA
> 
>
> Key: HDFS-8897
> URL: https://issues.apache.org/jira/browse/HDFS-8897
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.7.1
> Environment: Centos 6.6
>Reporter: LINTE
>Assignee: John Zhuge
> Attachments: HDFS-8897.001.patch, HDFS-8897.002.patch
>
>
> When the balancer is launched, it should test whether a 
> /system/balancer.id file already exists in HDFS.
> When the file doesn't exist, the balancer still refuses to run:
> 15/08/14 16:35:12 INFO balancer.Balancer: namenodes  = [hdfs://sandbox/, 
> hdfs://sandbox]
> 15/08/14 16:35:12 INFO balancer.Balancer: parameters = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, max idle iteration 
> = 5, number of nodes to be excluded = 0, number of nodes to be included = 0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> java.io.IOException: Another Balancer is running..  Exiting ...
> Aug 14, 2015 4:35:14 PM  Balancing took 2.408 seconds
> Looking at the audit log file when trying to run the balancer: the balancer 
> creates /system/balancer.id and then deletes it on exiting ... 
> 2015-08-14 16:37:45,844 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:45,900 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=create  
> src=/system/balancer.id dst=null perm=hdfs:hadoop:rw-r-  
> proto=rpc
> 2015-08-14 16:37:45,919 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:46,090 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:46,112 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null perm=null   proto=rpc
> 2015-08-14 16:37:46,117 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=delete  
> src=/system/balancer.id dst=null perm=null   proto=rpc
> The error seems to be located in 
> org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java 
> The function checkAndMarkRunning returns null even if /system/balancer.id 
> doesn't exist before entering this function; if it exists, then it is deleted 
> and the balancer exits with the same error.
> 
>   private OutputStream checkAndMarkRunning() throws IOException {
>     try {
>       if (fs.exists(idPath)) {
>         // Try appending to it so that it will fail fast if another
>         // balancer is running.
>         IOUtils.closeStream(fs.append(idPath));
>         fs.delete(idPath, true);
>       }
>       final FSDataOutputStream fsout = fs.create(idPath, false);
>       // Mark balancer idPath to be deleted during filesystem closure.
>       fs.deleteOnExit(idPath);
>       if (write2IdFile) {
>         fsout.writeBytes(InetAddress.getLocalHost().getHostName());
>         fsout.hflush();
>       }
>       return fsout;
>     } catch (RemoteException e) {
>       if (AlreadyBeingCreatedException.class.getName().equals(
>           e.getClassName())) {
>         return null;
>       } else {
>         throw e;
>       }
>     }
>   }
> 
> Regards



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397772#comment-15397772
 ] 

Xiaoyu Yao commented on HDFS-10676:
---

[~hkoneru], thanks for updating the patch. Please remove the unused imports as 
indicated in the checkstyle report. 
+1 after that. 

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10696) TestHDFSCLI fails

2016-07-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397769#comment-15397769
 ] 

Wei-Chiu Chuang commented on HDFS-10696:


[~xyao], [~lewuathe], [~ajisakaa] thanks for fixing the test failure!

> TestHDFSCLI fails
> -
>
> Key: HDFS-10696
> URL: https://issues.apache.org/jira/browse/HDFS-10696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Kai Sasaki
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10696.01.patch, HDFS-10696.02.patch
>
>
> TestHDFSCLI fails.
> {noformat}2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(177)) -  Comparator: 
> [RegexpComparator]
> 2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(179)) -  Comparision result:   
> [fail]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(181)) - Expected output:   [^( 
> |\t)*The storage type specific quota is cleared when -storageType option is 
> specified.( )*]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(183)) -   Actual output:   
> [-clrSpaceQuota [-storageType ] ...: Clear the 
> space quota for each directory .
> For each directory, attempt to clear the quota. An error will 
> be reported if
> 1. the directory does not exist or is a file, or
> 2. user is not an administrator.
> It does not fault if the directory has no quota.
> The storage type specific quota is cleared when -storageType 
> option is specified.   Available storageTypes are 
> - RAM_DISK
> - DISK
> - SSD
> - ARCHIVE
> ]
> {noformat}
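
An aside on the mismatch quoted above: the expected pattern is anchored at the 
start, but the actual help line carries extra text ("Available storageTypes 
are ...") after the sentence, so whether it passes depends on partial versus 
full matching. A standalone illustration of that difference (assumed line 
shape; this does not assert which mode the test helper uses):

{code}
import java.util.regex.Pattern;

public class RegexpMatchDemo {
  public static void main(String[] args) {
    // The expected pattern from the test, as quoted above.
    Pattern p = Pattern.compile("^( |\\t)*The storage type specific quota "
        + "is cleared when -storageType option is specified.( )*");
    // A line resembling the actual output, with trailing text after the sentence.
    String actual = "The storage type specific quota is cleared when "
        + "-storageType option is specified.   Available storageTypes are";
    System.out.println(p.matcher(actual).find());    // true: prefix matches
    System.out.println(p.matcher(actual).matches()); // false: trailing text left over
  }
}
{code}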



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10696) TestHDFSCLI fails

2016-07-28 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397763#comment-15397763
 ] 

Yongjun Zhang commented on HDFS-10696:
--

Hi [~kaisasak] and [~ajisakaa],

Thanks a lot for the quick turnaround!


> TestHDFSCLI fails
> -
>
> Key: HDFS-10696
> URL: https://issues.apache.org/jira/browse/HDFS-10696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Kai Sasaki
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10696.01.patch, HDFS-10696.02.patch
>
>
> TestHDFSCLI fails.
> {noformat}2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(177)) -  Comparator: 
> [RegexpComparator]
> 2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(179)) -  Comparision result:   
> [fail]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(181)) - Expected output:   [^( 
> |\t)*The storage type specific quota is cleared when -storageType option is 
> specified.( )*]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(183)) -   Actual output:   
> [-clrSpaceQuota [-storageType ] ...: Clear the 
> space quota for each directory .
> For each directory, attempt to clear the quota. An error will 
> be reported if
> 1. the directory does not exist or is a file, or
> 2. user is not an administrator.
> It does not fault if the directory has no quota.
> The storage type specific quota is cleared when -storageType 
> option is specified.   Available storageTypes are 
> - RAM_DISK
> - DISK
> - SSD
> - ARCHIVE
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Attachment: HDFS-10690.001.patch

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Attachment: (was: lrulist.patch)

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Affects Version/s: (was: 2.6.4)
   3.0.0-alpha2
 Target Version/s: 3.0.0-beta1  (was: 3.0.0-alpha2)
   Status: Patch Available  (was: Open)

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: lrulist.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397637#comment-15397637
 ] 

Fenghua Hu commented on HDFS-10690:
---

Some comments about the patch:
1. LruList.java implements a doubly-linked list to track the cached 
ShortCircuitReplica objects.
2. Two references are added to the ShortCircuitReplica object to eliminate the 
lookup in the list.
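
For illustration, a minimal sketch of such an intrusive doubly-linked FIFO, 
where each element carries its own prev/next references so removal needs no 
lookup (names and structure are illustrative, not the actual patch):

{code}
// Illustrative sketch only, not the actual LruList from the patch.
class Replica {
  Replica prev, next;  // embedded links, as added to ShortCircuitReplica
  final long id;
  Replica(long id) { this.id = id; }
}

class FifoList {
  private Replica head, tail;

  /** O(1): append at the tail; FIFO order equals insertion time. */
  void append(Replica r) {
    r.prev = tail;
    r.next = null;
    if (tail != null) { tail.next = r; } else { head = r; }
    tail = r;
  }

  /** O(1): unlink by patching the neighbors' references; no traversal. */
  void remove(Replica r) {
    if (r.prev != null) { r.prev.next = r.next; } else { head = r.next; }
    if (r.next != null) { r.next.prev = r.prev; } else { tail = r.prev; }
    r.prev = r.next = null;
  }

  /** O(1): the eviction candidate is simply the oldest element. */
  Replica oldest() { return head; }
}
{code}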

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.4
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: lrulist.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Attachment: lrulist.patch

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.4
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: lrulist.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Status: Open  (was: Patch Available)

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.4
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-28 Thread Fenghua Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fenghua Hu updated HDFS-10690:
--
Affects Version/s: 2.6.4
 Target Version/s: 3.0.0-alpha2
 Tags: ShortCircuitCache
   Status: Patch Available  (was: Open)

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.6.4
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas.
> private final TreeMap evictable = new TreeMap<>();
> private final TreeMap evictableMmapped = new 
> TreeMap<>();
> TreeMap employs Red-Black tree for sorting. This isn't an issue when using 
> traditional HDD. But when using high-performance SSD/PCIe Flash, the cost 
> inserting/removing an entry  becomes considerable.
> To mitigate it, we designed a new list-based for replica tracking.
> The list is a double-linked FIFO. FIFO is time-based, thus insertion is a 
> very low cost operation. On the other hand, list is not lookup-friendly. To 
> address this issue, we introduce two references into ShortCircuitReplica 
> object.
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, lookup is not needed when removing a replica from the list. We 
> only need to modify its predecessor's and successor's references in the lists.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as storage media.
> The original patch is against 2.6.4, now I am porting to Hadoop trunk, and 
> patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-07-28 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397547#comment-15397547
 ] 

Bob Hansen commented on HDFS-10441:
---

James - thanks for the responses.  

+1

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch, 
> HDFS-10441.HDFS-8707.010.patch, HDFS-10441.HDFS-8707.011.patch, 
> HDFS-10441.HDFS-8707.012.patch, HDFS-10441.HDFS-8707.013.patch, 
> HDFS-10441.HDFS-8707.014.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10457) DataNode should not auto-format block pool directory if VERSION is missing

2016-07-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397466#comment-15397466
 ] 

Wei-Chiu Chuang commented on HDFS-10457:


TestHDFSCLI was just fixed in HDFS-10696. Other than that, the rest of the 
tests passed in my local tree.

> DataNode should not auto-format block pool directory if VERSION is missing
> --
>
> Key: HDFS-10457
> URL: https://issues.apache.org/jira/browse/HDFS-10457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10457.001.patch, HDFS-10457.002.patch
>
>
> HDFS-10360 prevents DN to auto-formats a volume directory if the 
> current/VERSION is missing. However, if instead, the current/VERSION in a 
> block pool directory is missing, DN still auto-formats the directory.
> Filing this jira to fix the bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2016-07-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10701:
---
Component/s: erasure-coding

> TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
> --
>
> Key: HDFS-10701
> URL: https://issues.apache.org/jira/browse/HDFS-10701
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>
> I noticed this test failure in a recent precommit build, and I also found 
> this test had failed a few times in the Hadoop-Hdfs-trunk build in the past. 
> But I do not have sufficient knowledge to tell if it's a flaky test or a bug 
> in the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails

2016-07-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-10701:
--

 Summary: 
TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
 Key: HDFS-10701
 URL: https://issues.apache.org/jira/browse/HDFS-10701
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Wei-Chiu Chuang


I noticed this test failure in a recent precommit build, and I also found this 
test had failed a few times in the Hadoop-Hdfs-trunk build in the past. But I 
do not have sufficient knowledge to tell if it's a flaky test or a bug in the 
code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8897) Balancer should handle fs.defaultFS trailing slash in HA

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397418#comment-15397418
 ] 

Hadoop QA commented on HDFS-8897:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 135 unchanged - 2 fixed = 136 total (was 137) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820683/HDFS-8897.002.patch |
| JIRA Issue | HDFS-8897 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0762b310c5a0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 414fbfa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16234/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16234/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16234/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16234/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Balancer should handle fs.defaultFS 

[jira] [Commented] (HDFS-10691) FileDistribution fails in hdfs oiv command due to ArrayIndexOutOfBoundsException

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397406#comment-15397406
 ] 

Hadoop QA commented on HDFS-10691:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
57s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820686/HDFS-10691.003.patch |
| JIRA Issue | HDFS-10691 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 03e832ae67e9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 414fbfa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16235/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16235/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FileDistribution fails in hdfs oiv command due to 
> ArrayIndexOutOfBoundsException
> 
>
> Key: HDFS-10691
> URL: https://issues.apache.org/jira/browse/HDFS-10691
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10691.001.patch, HDFS-10691.002.patch, 
> HDFS-10691.003.patch
>
>
> I used the hdfs oiv -p FileDistribution command to do a file analysis, but the 
> 

[jira] [Commented] (HDFS-10691) FileDistribution fails in hdfs oiv command due to ArrayIndexOutOfBoundsException

2016-07-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397342#comment-15397342
 ] 

Yiqun Lin commented on HDFS-10691:
--

Thanks [~ajisakaa] for the review. Your suggestion seems good, but I think it 
will be better to make a change like this, rather than changing the original 
logic for calculating the bucket value:
{code}
int bucket = fileSize > maxSize ? distribution.length - 1
    : (int) Math.ceil((double) fileSize / steps);
if (bucket >= distribution.length) {
  bucket = distribution.length - 1;
}
{code}
Because if fileSize already exceeds maxSize, we can return 
{{distribution.length - 1}} directly, and there is no need to compute 
{{(int) Math.ceil((double) fileSize / steps)}}.

Posted a new patch to simplify this logic and fix the checkstyle warnings.
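
For illustration, plugging the numbers quoted below into that logic shows both 
the overflow and the clamp (a standalone sketch with a hypothetical file size, 
not the patch itself):

{code}
public class BucketClampDemo {
  public static void main(String[] args) {
    long maxSize = 104857600L;   // -maxSize from the report below
    long steps = 1024000L;       // -step from the report below
    long fileSize = 104500000L;  // hypothetical size in ((maxSize/steps)*steps, maxSize]

    int numIntervals = (int) (maxSize / steps);      // 102
    int[] distribution = new int[numIntervals + 1];  // valid indices 0..102

    int bucket = fileSize > maxSize ? distribution.length - 1
        : (int) Math.ceil((double) fileSize / steps);
    System.out.println("raw bucket = " + bucket);    // 103: out of bounds
    if (bucket >= distribution.length) {
      bucket = distribution.length - 1;              // the proposed clamp
    }
    System.out.println("clamped    = " + bucket);    // 102
  }
}
{code}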

> FileDistribution fails in hdfs oiv command due to 
> ArrayIndexOutOfBoundsException
> 
>
> Key: HDFS-10691
> URL: https://issues.apache.org/jira/browse/HDFS-10691
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10691.001.patch, HDFS-10691.002.patch, 
> HDFS-10691.003.patch
>
>
> I used the hdfs oiv -p FileDistribution command to do a file analysis, but an 
> {{ArrayIndexOutOfBoundsException}} occurred and terminated the process. 
> The stack trace:
> {code}
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 103
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:243)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:176)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:176)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:129)
> {code}
> I looked into the code and I found the exception was threw in increasing 
> count of {{distribution}}. And the reason for the exception is that the 
> bucket number was more than the distribution's length.
> Here are my steps:
> 1).The input command params:
> {code}
> hdfs oiv -p FileDistribution -maxSize 104857600 -step 1024000
> {code}
> The {{numIntervals}} in code should be 104857600/1024000 =102(real 
> value:102.4), so the {{distribution}}'s length should be {{numIntervals}} + 1 
> = 103.
> 2).The {{ArrayIndexOutOfBoundsException}} will happens when the file size is 
> in range ((maxSize/step)*step, maxSize]. For example, if the size of one file 
> is 10480, and it's in range of size as mentioned before. And the bucket 
> number is calculated as 10480/1024000=102.3, then in code we do the 
> {{Math.ceil}} of this, so the final value should be 103. But the 
> {{distribution}}'s length is also 103, it means the index is from 0 to 102. 
> So the {{ArrayIndexOutOfBoundsException}} happens.
> In short, the exception occurs when {{maxSize}} is not evenly divisible by 
> {{step}} and the file size falls in the range ((maxSize/step)*step, maxSize]. 
> The related logic should be changed from 
> {code}
> int bucket = fileSize > maxSize ? distribution.length - 1
>     : (int) Math.ceil((double) fileSize / steps);
> {code}
> to 
> {code}
> int bucket =
>     fileSize >= maxSize || fileSize > (maxSize / steps) * steps
>         ? distribution.length - 1
>         : (int) Math.ceil((double) fileSize / steps);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10691) FileDistribution fails in hdfs oiv command due to ArrayIndexOutOfBoundsException

2016-07-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10691:
-
Attachment: HDFS-10691.003.patch

> FileDistribution fails in hdfs oiv command due to 
> ArrayIndexOutOfBoundsException
> 
>
> Key: HDFS-10691
> URL: https://issues.apache.org/jira/browse/HDFS-10691
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10691.001.patch, HDFS-10691.002.patch, 
> HDFS-10691.003.patch
>
>
> I used the hdfs oiv -p FileDistribution command to do a file analysis, but an 
> {{ArrayIndexOutOfBoundsException}} occurred and caused the process to 
> terminate. The stack trace:
> {code}
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 103
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:243)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:176)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:176)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:129)
> {code}
> I looked into the code and found that the exception was thrown while 
> incrementing a count in {{distribution}}: the bucket number exceeded the 
> length of the distribution array.
> Here are my steps:
> 1) The input command parameters:
> {code}
> hdfs oiv -p FileDistribution -maxSize 104857600 -step 1024000
> {code}
> In the code, {{numIntervals}} is 104857600/1024000 = 102 (integer division; 
> the exact value is 102.4), so the length of {{distribution}} is 
> {{numIntervals}} + 1 = 103.
> 2) The {{ArrayIndexOutOfBoundsException}} happens when the file size falls in 
> the range ((maxSize/step)*step, maxSize]. For example, a file of 104800000 
> bytes is in this range, and its bucket number is calculated as 
> 104800000/1024000, about 102.3; after {{Math.ceil}} the final value is 103. 
> But the length of {{distribution}} is also 103, so the valid indices run 
> from 0 to 102, and the {{ArrayIndexOutOfBoundsException}} is thrown.
> In short, the exception occurs when {{maxSize}} is not evenly divisible by 
> {{step}} and the file size falls in the range ((maxSize/step)*step, maxSize]. 
> The related logic should be changed from 
> {code}
> int bucket = fileSize > maxSize ? distribution.length - 1
>     : (int) Math.ceil((double) fileSize / steps);
> {code}
> to 
> {code}
> int bucket =
>     fileSize >= maxSize || fileSize > (maxSize / steps) * steps
>         ? distribution.length - 1
>         : (int) Math.ceil((double) fileSize / steps);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8897) Balancer should handle fs.defaultFS trailing slash in HA

2016-07-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-8897:
-
Attachment: HDFS-8897.002.patch

Patch 002:
* Fix {{DFSUtil#getNameServiceUris}} to sanitize the {{fs.defaultFS}} property 
(see the sketch below)
* Update a unit test in {{TestDFSUtil}}
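
For context, a minimal sketch of the kind of sanitization described above (the 
helper and class names are hypothetical; this is not the actual patch): a 
trailing slash in {{fs.defaultFS}}, e.g. {{hdfs://sandbox/}}, parses to a URI 
whose path is "/", so it no longer compares equal to {{hdfs://sandbox}}, and 
the balancer ends up treating one HA nameservice as two, as the duplicate 
namenode list in the quoted log below shows.
{code}
// Hypothetical sketch of the sanitization idea; not the HDFS-8897 patch itself.
import java.net.URI;
import java.net.URISyntaxException;

public class DefaultFsSanitizer {
  // Drop a bare trailing "/" so hdfs://sandbox/ and hdfs://sandbox compare equal.
  static URI sanitize(URI uri) throws URISyntaxException {
    if ("/".equals(uri.getPath())) {
      return new URI(uri.getScheme(), uri.getAuthority(), null, null, null);
    }
    return uri;
  }

  public static void main(String[] args) throws URISyntaxException {
    URI withSlash = new URI("hdfs://sandbox/");
    URI bare = new URI("hdfs://sandbox");
    System.out.println(withSlash.equals(bare));            // false
    System.out.println(sanitize(withSlash).equals(bare));  // true
  }
}
{code}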


> Balancer should handle fs.defaultFS trailing slash in HA
> 
>
> Key: HDFS-8897
> URL: https://issues.apache.org/jira/browse/HDFS-8897
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.7.1
> Environment: Centos 6.6
>Reporter: LINTE
>Assignee: John Zhuge
> Attachments: HDFS-8897.001.patch, HDFS-8897.002.patch
>
>
> When the balancer is launched, it tests whether a /system/balancer.id file 
> already exists in HDFS.
> Even when the file doesn't exist, the balancer refuses to run:
> 15/08/14 16:35:12 INFO balancer.Balancer: namenodes  = [hdfs://sandbox/, 
> hdfs://sandbox]
> 15/08/14 16:35:12 INFO balancer.Balancer: parameters = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, max idle iteration 
> = 5, number of nodes to be excluded = 0, number of nodes to be included = 0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> java.io.IOException: Another Balancer is running..  Exiting ...
> Aug 14, 2015 4:35:14 PM  Balancing took 2.408 seconds
> Looking at the audit log while trying to run the balancer, the balancer 
> creates /system/balancer.id and then deletes it on exit:
> 2015-08-14 16:37:45,844 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:45,900 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=create  
> src=/system/balancer.id dst=nullperm=hdfs:hadoop:rw-r-  
> proto=rpc
> 2015-08-14 16:37:45,919 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,090 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,112 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,117 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=delete  
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> The error seems to be located in 
> org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java.
> The function checkAndMarkRunning returns null even if /system/balancer.id did 
> not exist before entering the function; if it did exist, it is deleted and 
> the balancer exits with the same error.
> 
>   private OutputStream checkAndMarkRunning() throws IOException {
> try {
>   if (fs.exists(idPath)) {
> // try appending to it so that it will fail fast if another balancer 
> is
> // running.
> IOUtils.closeStream(fs.append(idPath));
> fs.delete(idPath, true);
>   }
>   final FSDataOutputStream fsout = fs.create(idPath, false);
>   // mark balancer idPath to be deleted during filesystem closure
>   fs.deleteOnExit(idPath);
>   if (write2IdFile) {
> fsout.writeBytes(InetAddress.getLocalHost().getHostName());
> fsout.hflush();
>   }
>   return fsout;
> } catch(RemoteException e) {
>   
> if(AlreadyBeingCreatedException.class.getName().equals(e.getClassName())){
> return null;
>   } else {
> throw e;
>   }
> }
>   }
> 
> Regards



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10696) TestHDFSCLI fails

2016-07-28 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397293#comment-15397293
 ] 

Kai Sasaki commented on HDFS-10696:
---

[~xyao] [~ajisakaa] Thanks so much!

> TestHDFSCLI fails
> -
>
> Key: HDFS-10696
> URL: https://issues.apache.org/jira/browse/HDFS-10696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Kai Sasaki
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10696.01.patch, HDFS-10696.02.patch
>
>
> TestHDFSCLI fails.
> {noformat}2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(177)) -  Comparator: 
> [RegexpComparator]
> 2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(179)) -  Comparision result:   
> [fail]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(181)) - Expected output:   [^( 
> |\t)*The storage type specific quota is cleared when -storageType option is 
> specified.( )*]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(183)) -   Actual output:   
> [-clrSpaceQuota [-storageType ] ...: Clear the 
> space quota for each directory .
> For each directory, attempt to clear the quota. An error will 
> be reported if
> 1. the directory does not exist or is a file, or
> 2. user is not an administrator.
> It does not fault if the directory has no quota.
> The storage type specific quota is cleared when -storageType 
> option is specified.   Available storageTypes are 
> - RAM_DISK
> - DISK
> - SSD
> - ARCHIVE
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397284#comment-15397284
 ] 

Hadoop QA commented on HDFS-10645:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820664/HDFS-10645.005.patch |
| JIRA Issue | HDFS-10645 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux 950f1e8d98bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d06bda |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16233/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16233/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16233/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.
