[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160769#comment-16160769
 ] 

Hadoop QA commented on HDFS-12235:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 12 
unchanged - 2 fixed = 12 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.scm.TestXceiverClientManager |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Assigned] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reassigned HDFS-12414:


Assignee: SammiChen

> Ensure to use CLI command to enable/disable erasure coding policy
> -
>
> Key: HDFS-12414
> URL: https://issues.apache.org/jira/browse/HDFS-12414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
>
> Currently, there are two methods for a user to enable/disable an erasure coding 
> policy. One is the "dfs.namenode.ec.policies.enabled" property, which is a 
> static way to configure the enabled erasure coding policies. The other is the 
> "enableErasureCodingPolicy" / "disableErasureCodingPolicy" API, 
> which can enable or disable an erasure coding policy at runtime. 
> When the NameNode restarts, there are potential state conflicts between the 
> policies defined in "dfs.namenode.ec.policies.enabled" and the policies saved in the fsImage. To 
> resolve the conflict and simplify the operation, it's better to use just one 
> way and remove the old method of configuring the property.
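For illustration, a minimal sketch of the two current paths, assuming a Hadoop 3.x client whose default filesystem is HDFS; the policy name here is just an example:

{code}
// Static path: list the enabled policies in hdfs-site.xml, read at NN startup.
//   <property>
//     <name>dfs.namenode.ec.policies.enabled</name>
//     <value>RS-6-3-1024k</value>
//   </property>

// Runtime path: toggle a policy through the client API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TogglePolicyAtRuntime {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    dfs.enableErasureCodingPolicy("RS-6-3-1024k");   // takes effect immediately
    dfs.disableErasureCodingPolicy("RS-6-3-1024k");  // persisted state may then
                                                     // conflict with the property
  }
}
{code}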



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reassigned HDFS-12395:


Assignee: SammiChen  (was: Kai Zheng)

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12372) Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12372:
--
Target Version/s: 2.9.0, 2.8.3, 3.0.0  (was: 2.9.0, 2.8.2, 3.0.0)

> Document the impact of HDFS-11069 (Tighten the authorization of datanode RPC)
> -
>
> Key: HDFS-12372
> URL: https://issues.apache.org/jira/browse/HDFS-12372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> The idea of HDFS-11069 is good, but it seems to cause confusion for 
> administrators when they issue commands like hdfs diskbalancer or hdfs 
> dfsadmin, because this change of behavior is not documented properly.
> I suggest we document a recommended way to kinit (e.g. kinit as 
> hdfs/ho...@host1.example.com rather than h...@example.com), as well as 
> document a notice about running privileged DataNode commands in a Kerberized 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11885:
--
Target Version/s: 2.9.0, 3.0.0-beta1, 2.8.3  (was: 2.9.0, 3.0.0-beta1, 
2.8.2)

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.
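A minimal sketch of one possible fix (not necessarily what the attached patches do), assuming a {{KeyProviderCryptoExtension}} handle; the class and method names here are invented:

{code}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Warm the EDEK cache off the RPC path so createZone returns promptly
// even when the KMS is slow or down.
class EdekCacheWarmer {
  private static final Logger LOG =
      LoggerFactory.getLogger(EdekCacheWarmer.class);
  private final KeyProviderCryptoExtension provider;
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  EdekCacheWarmer(KeyProviderCryptoExtension provider) {
    this.provider = provider;
  }

  // createEncryptionZone would call this and return without blocking.
  void warmUpAsync(final String keyName) {
    executor.submit(new Runnable() {
      @Override
      public void run() {
        try {
          provider.warmUpEncryptedKeys(keyName);  // the formerly blocking call
        } catch (IOException e) {
          LOG.warn("Failed to warm up EDEK cache for key " + keyName, e);
        }
      }
    });
  }
}
{code}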



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12142) Files may be closed before streamer is done

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12142:
--
Target Version/s: 2.8.3  (was: 2.8.2)

> Files may be closed before streamer is done
> ---
>
> Key: HDFS-12142
> URL: https://issues.apache.org/jira/browse/HDFS-12142
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>
> We're encountering multiple cases of clients calling updateBlockForPipeline 
> on completed blocks.  Initial analysis is that the client closes a file, 
> completeFile succeeds, then it immediately attempts recovery.  The exception 
> is swallowed on the client and only logged on the NN by checkUCBlock.
> The problem "appears" to be benign (no data loss), but it's unproven whether the 
> issue always occurs for successfully closed files.  There appears to be very 
> poor coordination between the dfs output stream's threads, which leads to 
> races that confuse the streamer thread – which probably should have been 
> joined before returning from close.
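A sketch of the coordination being suggested, with invented method names (this is not the actual {{DFSOutputStream}} code):

{code}
import java.io.IOException;
import java.io.InterruptedIOException;

// close() joins the streamer before completing the file at the NN, so a
// late streamer error cannot trigger recovery on an already completed block.
public void close() throws IOException {
  flushAllPendingPackets();    // hypothetical: drain the data queue
  streamer.closeStream();      // hypothetical: signal the streamer to finish
  try {
    streamer.join();           // wait for the thread to actually terminate
  } catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
    throw new InterruptedIOException("Interrupted while joining streamer");
  }
  completeFileAtNamenode();    // only now mark the file complete
}
{code}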



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11616) Namenode doesn't mark the block as non-corrupt if the reason for corruption was INVALID_STATE

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11616:
--
Target Version/s: 2.8.3  (was: 2.8.1)

> Namenode doesn't mark the block as non-corrupt if the reason for corruption 
> was INVALID_STATE
> -
>
> Key: HDFS-11616
> URL: https://issues.apache.org/jira/browse/HDFS-11616
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>
> Due to a power failure event, we hit HDFS-5042.
> We lost many racks across the cluster.
> There were a couple of missing blocks.
> For a given missing block, the following is the output of fsck.
> {noformat}
> [hdfs@XXX rushabhs]$ hdfs fsck -blockId blk_8566436445
> Connecting to namenode via 
> http://nn1:50070/fsck?ugi=hdfs=blk_8566436445+=%2F
> FSCK started by hdfs (auth:KERBEROS_SSL) from XXX at Mon Apr 03 16:22:48 UTC 
> 2017
> Block Id: blk_8566436445
> Block belongs to: 
> No. of Expected Replica: 3
> No. of live Replica: 0
> No. of excess Replica: 0
> No. of stale Replica: 0
> No. of decommissioned Replica: 0
> No. of decommissioning Replica: 0
> No. of corrupted Replica: 3
> Block replica on datanode/rack: datanodeA is CORRUPT   ReasonCode: 
> INVALID_STATE
> Block replica on datanode/rack: datanodeB is CORRUPT   ReasonCode: 
> INVALID_STATE
> Block replica on datanode/rack: datanodeC is CORRUPT   ReasonCode: 
> INVALID_STATE
> {noformat}
> After the power event, when we restarted the datanodes, the blocks were in the 
> rbw directory.
> When the full block report was sent to the namenode, all the blocks from the rbw 
> directory got converted into the RWR state and the namenode marked them as corrupt with 
> reason Reason.INVALID_STATE.
> After some time (in this case after 31 hours), when I went to recover the missing 
> blocks, I noticed the following things.
> All the datanodes had their copy of the block in the rbw directory, but the file 
> was complete according to the namenode.
> All the replicas had the right size and correct genstamp, and the {{hdfs debug 
> verify}} command also succeeded.
> I went to dnA and moved the block from the rbw directory to the finalized directory.
> I restarted the datanode (making sure the replicas file was not present during 
> startup).
> I forced a FBR and made sure the datanode reported the block to the namenode.
> After waiting for some time, the block was still missing.
> I expected the missing block to go away since the replica is in the FINALIZED 
> directory.
> On investigating more, I found out that the namenode will remove the replica from 
> the corrupt map only if the reason for corruption was {{GENSTAMP_MISMATCH}}.
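A simplified paraphrase of that behavior, with invented accessor names (the real check lives in the block manager's replica-report handling):

{code}
// A newly reported FINALIZED replica only clears the corrupt marker when
// the recorded corruption reason was a genstamp mismatch; replicas marked
// INVALID_STATE therefore stay "corrupt" even after being finalized.
if (corruptReplicas.getCorruptReason(block, node) == Reason.GENSTAMP_MISMATCH) {
  corruptReplicas.removeFromCorruptReplicasMap(block, node);
}
// A fix would presumably also clear replicas whose reported state and
// size/genstamp now match the completed block.
{code}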



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11602) Enable HttpFS Tomcat access logging

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11602:
--
Target Version/s: 2.8.3  (was: 2.8.1)

> Enable HttpFS Tomcat access logging
> ---
>
> Key: HDFS-11602
> URL: https://issues.apache.org/jira/browse/HDFS-11602
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Use Tomcat {{org.apache.catalina.valves.AccessLogValve}} to enable access 
> logging. Verify the solution works with an LB or a proxy.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12008) Improve the available-space block placement policy

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12008:
--
Target Version/s: 2.8.3  (was: 2.8.2)

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch, HDFS-12008.v2.branch-2.patch, 
> HDFS-12008.v2.trunk.patch, HDFS-12008.v2.trunk.patch, 
> RandomAllocationPolicy.png
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally and 
> then picks one of them. It could avoid picking the second node when that is not 
> necessary.
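A sketch of the idea (assumed, not the committed patch; helper and field names are hypothetical):

{code}
import java.util.concurrent.ThreadLocalRandom;

// Decide up front whether this placement will prefer the lower-used node;
// when it won't, one random pick suffices and the second lookup is skipped.
DatanodeDescriptor chooseCandidate(String scope) {
  DatanodeDescriptor first = chooseRandomNode(scope);       // hypothetical helper
  if (ThreadLocalRandom.current().nextInt(100) >= balancedPreferencePercent) {
    return first;               // no comparison needed this time
  }
  DatanodeDescriptor second = chooseRandomNode(scope);
  if (second == null) {
    return first;
  }
  return hasMoreAvailableSpace(first, second) ? first : second;
}
{code}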



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12070:
--
Target Version/s: 2.8.3  (was: 2.8.2)

> Failed block recovery leaves files open indefinitely and at risk for data loss
> --
>
> Key: HDFS-12070
> URL: https://issues.apache.org/jira/browse/HDFS-12070
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails, which creates a 
> high risk of data loss.  The replication monitor will not replicate these 
> blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which 
> involves a 2-stage process. The primary node removes any candidates that 
> cannot init replica recovery (essentially, are alive and know about the block) to 
> create a sync list.  Stage 2 issues updates to the sync list – _but fails if 
> any node fails_, unlike the first stage.  The NN should be informed of nodes 
> that did succeed.
> Manual recovery will also fail until the problematic node is temporarily 
> stopped, so that a connection refused will induce the bad node to be pruned from 
> the candidates.  Recovery then succeeds, the lease is released, under-replication 
> is fixed, and the block is invalidated on the bad node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11617) Datanode should delete the block from rbw directory when it finds duplicate in finalized directory.

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11617:
--
Target Version/s: 2.8.3  (was: 2.8.1)

> Datanode should delete the block from rbw directory when it finds duplicate 
> in finalized directory.
> ---
>
> Key: HDFS-11617
> URL: https://issues.apache.org/jira/browse/HDFS-11617
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>
> Recently we had a power failure event and we hit HDFS-5042.
> There were missing blocks, but the datanode had a copy of the block (and meta 
> file) in the rbw directory.
> I manually copied the block and meta file to the finalized directory and 
> restarted the datanode.
> But after the restart, the block somehow got deleted from the finalized directory.
> So I think the datanode tried to resolve the duplicate replicas and, in the process of 
> resolving them, deleted the replica from the finalized directory.
> In my opinion, if we have to choose between the rbw replica and the finalized replica 
> (assuming size and genstamp are the same), we should delete the rbw replica, not 
> the finalized replica.
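A sketch of the suggested resolution rule (an assumption, not the actual {{FsDatasetImpl}} change):

{code}
import java.io.File;
import java.io.IOException;

// When a block has both an rbw and a finalized copy with the same length
// (and matching genstamps in their names), keep FINALIZED and drop rbw.
static File resolveDuplicateReplica(File rbw, File finalized)
    throws IOException {
  if (rbw.length() == finalized.length()) {
    if (!rbw.delete()) {
      throw new IOException("Failed to delete duplicate rbw replica " + rbw);
    }
  }
  return finalized;  // the finalized copy wins
}
{code}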



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160641#comment-16160641
 ] 

Anu Engineer commented on HDFS-12235:
-

+1, v12, pending Jenkins. Thanks for updating the patch.

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, 
> HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, 
> HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch, 
> HDFS-12235-HDFS-7240.012.patch
>
>
> KSM and SCM interaction for the delete key operation: both KSM and SCM store key 
> state info in a backlog. KSM needs to scan this log and send block-deletion 
> commands to SCM; once SCM is fully aware of the message, KSM removes the key 
> completely from the namespace. See more in the design doc under HDFS-11922; 
> this is task breakdown 2.
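For readers following along, a hypothetical illustration of the scan/ACK loop described above (all names invented; see the design doc under HDFS-11922 for the real protocol):

{code}
// KSM-side backlog scan: a key leaves the namespace only after SCM has
// acknowledged the block-deletion request.
void scanDeleteKeyBacklog() throws IOException {
  for (PendingDeleteKey key : ksmBacklog.listPendingDeletes()) {
    // 1. ask SCM to delete the blocks backing the key
    DeleteBlocksResponse ack = scmClient.deleteBlocks(key.getBlockIds());
    // 2. only once SCM has durably recorded the request...
    if (ack.isAcknowledged()) {
      // 3. ...remove the key from the KSM namespace for good
      ksmMetadataManager.deleteKey(key.getKeyName());
    }
  }
}
{code}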



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-09-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160631#comment-16160631
 ] 

Kai Zheng commented on HDFS-7859:
-

Thanks Sammi for the update and the great work. In addition to the off-line 
discussion points regarding how to move on with this:

bq. There are existing "CacheManagerSection" saves cache directives for 
CacheManager, "SecretManagerSection" saves secrets for SecretManager. So its 
better follow the style, use "ErasureCodingPolicyManagerSection" to save the EC 
policies for ErasureCodingPolicyManager.
Good point. However, if you look at all the existing sections, you can see they 
are all concise and not verbose. For the fsimage definition, brevity makes 
sense unless it introduces ambiguity. I'd prefer to make the change 
{{ErasureCodingPolicyManagerSection}} => {{ErasureCodingSection}}, along with 
the related methods (which are even more lengthy).

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, 
> HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859.015.patch, 
> HDFS-7859.016.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reassigned HDFS-12395:


Assignee: Kai Zheng  (was: SammiChen)

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12235:
---
Attachment: HDFS-12235-HDFS-7240.012.patch

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, 
> HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, 
> HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch, 
> HDFS-12235-HDFS-7240.012.patch
>
>
> KSM and SCM interaction for the delete key operation: both KSM and SCM store key 
> state info in a backlog. KSM needs to scan this log and send block-deletion 
> commands to SCM; once SCM is fully aware of the message, KSM removes the key 
> completely from the namespace. See more in the design doc under HDFS-11922; 
> this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11676) Ozone: SCM CLI: Implement close container command

2017-09-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160618#comment-16160618
 ] 

Yiqun Lin edited comment on HDFS-11676 at 9/11/17 3:35 AM:
---

Thanks for the work, [~vagarychen]. The initial patch looks good overall. 
Only two minor comments:

*CloseContainerHandler.java*
line 27: {{Expecting container create}} should be {{Expecting container close}}.

*ContainerOperationClient.java*
line 293: I am thinking we could use the state machine as 
{{ContainerOperationClient#createContainer}} does; that seems better than 
directly updating the container state. There is already a {{CLOSED}} state and a 
corresponding event type defined in the state machine. After that, we can reuse 
{{Mapping#updateContainerState}} for updating the state.

In addition, can you help check the failing test 
{{hadoop.ozone.scm.node.TestNodeManager}}?
Please fix the findbugs, ASF and checkstyle warnings in your next patch. Thanks.


was (Author: linyiqun):
Thanks for the work, [~vagarychen]. The initial patch looks good overall. 
Only two minor comments:

*CloseContainerHandler.java*
line 27: {{Expecting container create}} should be {{Expecting container close}}.

*ContainerOperationClient.java*
line 293: I am thinking we could use the state machine; that seems better than 
directly updating the container state. There is already a {{CLOSED}} state and a 
corresponding event type defined in the state machine. After that, we can reuse 
{{Mapping#updateContainerState}} for updating the state.

In addition, can you help check the failing test 
{{hadoop.ozone.scm.node.TestNodeManager}}?
Please fix the findbugs, ASF and checkstyle warnings in your next patch. Thanks.

> Ozone: SCM CLI: Implement close container command
> -
>
> Key: HDFS-11676
> URL: https://issues.apache.org/jira/browse/HDFS-11676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Chen Liang
>  Labels: ozoneMerge, tocheck
> Attachments: HDFS-11676-HDFS-7240.001.patch
>
>
> Implement close container command
> {code}
> hdfs scm -container close 
> {code}
> This command connects to SCM and closes a container. Once the container is 
> closed in the SCM, the corresponding container is closed at the appropriate 
> datanode. If the container does not exist, it will return an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11676) Ozone: SCM CLI: Implement close container command

2017-09-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160618#comment-16160618
 ] 

Yiqun Lin commented on HDFS-11676:
--

Thanks for the work, [~vagarychen]. The initial patch looks good overall. 
Only two minor comments:

*CloseContainerHandler.java*
line 27: {{Expecting container create}} should be {{Expecting container close}}.

*ContainerOperationClient.java*
line 293: I am thinking we could use the state machine; that seems better than 
directly updating the container state. There is already a {{CLOSED}} state and a 
corresponding event type defined in the state machine. After that, we can reuse 
{{Mapping#updateContainerState}} for updating the state.

In addition, can you help check the failing test 
{{hadoop.ozone.scm.node.TestNodeManager}}?
Please fix the findbugs, ASF and checkstyle warnings in your next patch. Thanks.
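A sketch of the state-machine approach suggested above, with hypothetical signatures:

{code}
// Fire a CLOSE lifecycle event and let the container state machine
// validate and persist the OPEN -> CLOSED transition, rather than
// writing the new state directly.
public void closeContainer(String containerName) throws IOException {
  mapping.updateContainerState(containerName,
      OzoneProtos.LifeCycleEvent.CLOSE);  // hypothetical event constant
}
{code}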

> Ozone: SCM CLI: Implement close container command
> -
>
> Key: HDFS-11676
> URL: https://issues.apache.org/jira/browse/HDFS-11676
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Chen Liang
>  Labels: ozoneMerge, tocheck
> Attachments: HDFS-11676-HDFS-7240.001.patch
>
>
> Implement close container command
> {code}
> hdfs scm -container close 
> {code}
> This command connects to SCM and closes a container. Once the container is 
> closed in the SCM, the corresponding container is closed at the appropriate 
> datanode. If the container does not exist, it will return an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log

2017-09-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160615#comment-16160615
 ] 

Kai Zheng commented on HDFS-12395:
--

Thanks [~Sammi] for working on this. I have looked into the work and it looks 
close overall.

1. Overall, similar to HDFS-7859, this exposes lots of not-so-relevant changes 
(already made in the patch, or that we still need to make), so please feel free to open new 
issues to hold such changes separately. The essential change for this issue is 
to *log add/remove/enable/disable erasure coding policy*. 

2. Your local changes in {{pom.xml}} and {{editsStored.xml}}.

3. Your changes in {{DFSClient}} look like a bug fix to existing code.

4. Refactor: {{getEcPolicy}} => {{getErasureCodingPolicy}}; 
{{AddECPolicyResponse}} => {{AddErasureCodingPolicyResponse}}

5. I don't quite like sorting the map by creating a tree map. Also, 
could we improve {{ECSchema}} to ensure {{extraOptions}} is already sorted, so 
we don't need to consider doing it in every place? If you do this, please do it in a 
separate issue. 
{code}
+  // Sort extra options based on key
+  extraOptions = new TreeMap<>(extraOptions);
{code}
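A sketch of the {{ECSchema}} improvement being suggested (an assumption, not the actual change): store {{extraOptions}} in a sorted map once, so every consumer sees a deterministic key order without re-sorting.

{code}
import java.util.Collections;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public final class ECSchemaOptionsSketch {
  private final SortedMap<String, String> extraOptions;

  public ECSchemaOptionsSketch(Map<String, String> options) {
    // Sorted at construction time; callers never need a TreeMap copy.
    this.extraOptions =
        Collections.unmodifiableSortedMap(new TreeMap<String, String>(options));
  }

  public Map<String, String> getExtraOptions() {
    return extraOptions;  // already sorted by key
  }
}
{code}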

6. Please use non-meaningful options for test purposes to avoid possible 
confusion, like "testOption1" or the like.
{code}
+    Map<String, String> extraOptions = new HashMap<>();
+    extraOptions.put("padding", "0");
+    extraOptions.put("recycle", "true");
{code}

7. Please try to use the same order when dumping the fields of an erasure coding policy.
{code}
+  public static void writeErasureCodingPolicy(DataOutputStream out,
+  ErasureCodingPolicy ecPolicy) throws IOException {
+writeInt(ecPolicy.getCellSize(), out);
+writeString(ecPolicy.getSchema().getCodecName(), out);
+writeInt(ecPolicy.getNumDataUnits(), out);
+writeInt(ecPolicy.getNumParityUnits(), out);
}
{code}
{code}
+  XMLUtils.addSaxString(contentHandler, "CODEC", ecPolicy.getCodecName());
+  XMLUtils.addSaxString(contentHandler, "CELLSIZE",
+  Integer.toString(ecPolicy.getCellSize()));
+  XMLUtils.addSaxString(contentHandler, "DATAUNITS",
+  Integer.toString(ecPolicy.getNumDataUnits()));
+  XMLUtils.addSaxString(contentHandler, "PARITYUNITS",
+  Integer.toString(ecPolicy.getNumParityUnits()));
{code}

8. Not sure why we need to catch and convert the exception here, but not in 
other places. Better to pass {{e}} itself instead of {{e.getMessage()}} when 
converting to the new IOException. 
{code}
+case OP_ADD_ERASURE_CODING_POLICY:
+  AddErasureCodingPolicyOp addOp = (AddErasureCodingPolicyOp) op;
+  try {
+fsNamesys.getErasureCodingPolicyManager().addPolicy(
+addOp.getEcPolicy());
+  } catch (HadoopIllegalArgumentException e) {
+throw new IOException("Add erasure coding policy failed for:" +
+e.getMessage());
+  }
{code}
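That is, something along these lines (a sketch of the suggested change):

{code}
case OP_ADD_ERASURE_CODING_POLICY:
  AddErasureCodingPolicyOp addOp = (AddErasureCodingPolicyOp) op;
  try {
    fsNamesys.getErasureCodingPolicyManager().addPolicy(addOp.getEcPolicy());
  } catch (HadoopIllegalArgumentException e) {
    // pass e as the cause so the original stack trace is preserved
    throw new IOException("Add erasure coding policy failed for "
        + addOp.getEcPolicy().getName(), e);
  }
{code}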

> Support erasure coding policy operations in namenode edit log
> -
>
> Key: HDFS-12395
> URL: https://issues.apache.org/jira/browse/HDFS-12395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch
>
>
> Support add, remove, disable, enable erasure coding policy operation in edit 
> log. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160611#comment-16160611
 ] 

Weiwei Yang commented on HDFS-12235:


Hi [~anu]

Thank you.

bq. BlockManagerImpl.java, deleteBlocks – is there a reason why we removed the 
chill mode check?

I don't think that was an intentional change; I will add it back. Thanks for 
catching this. I will fix it in the v12 patch. If the v12 patch gives me a clean 
Jenkins report, do you think I can commit this today? I want to get this done 
soon because there are some other patches depending on it.

Thanks!

> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, 
> HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, 
> HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch
>
>
> KSM and SCM interaction for the delete key operation: both KSM and SCM store key 
> state info in a backlog. KSM needs to scan this log and send block-deletion 
> commands to SCM; once SCM is fully aware of the message, KSM removes the key 
> completely from the namespace. See more in the design doc under HDFS-11922; 
> this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160607#comment-16160607
 ] 

Anu Engineer edited comment on HDFS-12235 at 9/11/17 3:16 AM:
--

+1 for v11. Thanks for getting this done. Please go ahead and commit. Two minor 
comments:

1. The blockManagerImpl code will change a bit with the Ratis pipeline work, so 
I will rebase my work over this patch once this is committed.

2. BlockManagerImpl.java, deleteBlocks -- is there a reason why we removed the 
chill mode check? It is not important, but just wondering if we need that at 
all?

cc: [~xyao]


was (Author: anu):
+1 for v11. Thanks for getting this done. Please go ahead and commit. Two minor 
comments:

1. The blockManagerImpl code will change a bit with the Ratis pipeline work, so 
I will rebase my work over this patch once this is committed.

2. BlockManagerImpl.java, deleteBlocks -- is there a reason why we removed the 
chill mode check? It is not important, but just wondering if we need that at 
all?


> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, 
> HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, 
> HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch
>
>
> KSM and SCM interaction for the delete key operation: both KSM and SCM store key 
> state info in a backlog. KSM needs to scan this log and send block-deletion 
> commands to SCM; once SCM is fully aware of the message, KSM removes the key 
> completely from the namespace. See more in the design doc under HDFS-11922; 
> this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8439) Adding more slow action log in critical read path

2017-09-10 Thread Wang, Xinglong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160603#comment-16160603
 ] 

Wang, Xinglong commented on HDFS-8439:
--

This is a very interesting feature for HBase. Do we have a plan to patch this?

> Adding more slow action log in critical read path
> -
>
> Key: HDFS-8439
> URL: https://issues.apache.org/jira/browse/HDFS-8439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Attachments: HDFS-8439.001.patch, HDFS-8439.002.patch, 
> HDFS-8439.003.patch
>
>
> To dig into an HBase read spike issue, we'd better add more slow pread/seek 
> logging in the read flow to identify the abnormal datanodes.
> A patch will be uploaded soon.
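An illustrative sketch of the kind of slow-action logging proposed (the threshold name is invented; this is not the attached patch):

{code}
import org.apache.hadoop.util.Time;

// Time each pread and name the datanode when it exceeds a threshold.
long begin = Time.monotonicNow();
int nread = blockReader.readAll(buf, offset, len);
long duration = Time.monotonicNow() - begin;
if (duration > slowReadThresholdMs) {          // hypothetical config knob
  DFSClient.LOG.warn("Slow pread of " + nread + " bytes from datanode "
      + currentNode + " took " + duration + " ms (threshold "
      + slowReadThresholdMs + " ms)");
}
{code}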



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-09-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160607#comment-16160607
 ] 

Anu Engineer commented on HDFS-12235:
-

+1 for v11. Thanks for getting this done. Please go ahead and commit. Two minor 
comments:

1. The blockManagerImpl code will change a bit with the Ratis pipeline work, so 
I will rebase my work over this patch once this is committed.

2. BlockManagerImpl.java, deleteBlocks -- is there a reason why we removed the 
chill mode check? It is not important, but just wondering if we need that at 
all?


> Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
> ---
>
> Key: HDFS-12235
> URL: https://issues.apache.org/jira/browse/HDFS-12235
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ozoneMerge
> Attachments: HDFS-12235-HDFS-7240.001.patch, 
> HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, 
> HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, 
> HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, 
> HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, 
> HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch
>
>
> KSM and SCM interaction for the delete key operation: both KSM and SCM store key 
> state info in a backlog. KSM needs to scan this log and send block-deletion 
> commands to SCM; once SCM is fully aware of the message, KSM removes the key 
> completely from the namespace. See more in the design doc under HDFS-11922; 
> this is task breakdown 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12414) Ensure to use CLI command to enable/disable erasure coding policy

2017-09-10 Thread SammiChen (JIRA)
SammiChen created HDFS-12414:


 Summary: Ensure to use CLI command to enable/disable erasure 
coding policy
 Key: HDFS-12414
 URL: https://issues.apache.org/jira/browse/HDFS-12414
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: SammiChen


Currently, there are two methods for a user to enable/disable an erasure coding 
policy. One is the "dfs.namenode.ec.policies.enabled" property, which is a 
static way to configure the enabled erasure coding policies. The other is the 
"enableErasureCodingPolicy" / "disableErasureCodingPolicy" API, which can 
enable or disable an erasure coding policy at runtime. 
When the NameNode restarts, there are potential state conflicts between the policy 
defined in "dfs.namenode.ec.policies.enabled" and the policy saved in the fsImage. To 
resolve the conflict and simplify the operation, it's better to use just one 
way and remove the old method of configuring the property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160598#comment-16160598
 ] 

Hadoop QA commented on HDFS-7878:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
24s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
37s{color} | {color:green} root generated 0 new + 1281 unchanged - 2 fixed = 
1281 total (was 1283) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 27s{color} | {color:orange} root: The patch generated 1 new + 446 unchanged 
- 4 fixed = 447 total (was 450) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
15s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}210m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | 

[jira] [Assigned] (HDFS-12413) Inotify should support erasure coding policy op as replica meta change

2017-09-10 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang reassigned HDFS-12413:
---

Assignee: Huafeng Wang

> Inotify should support erasure coding policy op as replica meta change
> --
>
> Key: HDFS-12413
> URL: https://issues.apache.org/jira/browse/HDFS-12413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>Assignee: Huafeng Wang
>
> Currently, HDFS Inotify already supports meta changes like a replication 
> change for a file. We should similarly support erasure coding policy 
> setting/unsetting for a file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12413) Inotify should support erasure coding policy op as replica meta change

2017-09-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160586#comment-16160586
 ] 

Kai Zheng commented on HDFS-12413:
--

It would be great if we could get this done before the 3.0 GA.

> Inotify should support erasure coding policy op as replica meta change
> --
>
> Key: HDFS-12413
> URL: https://issues.apache.org/jira/browse/HDFS-12413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>
> Currently, HDFS Inotify already supports meta changes like a replication 
> change for a file. We should similarly support erasure coding policy 
> setting/unsetting for a file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12413) Inotify should support erasure coding policy op as replica meta change

2017-09-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160583#comment-16160583
 ] 

Kai Zheng commented on HDFS-12413:
--

Ping [~HuafengWang]. I know you're familiar with the HDFS inotify feature; would 
you take this one and work on it? Thanks!

> Inotify should support erasure coding policy op as replica meta change
> --
>
> Key: HDFS-12413
> URL: https://issues.apache.org/jira/browse/HDFS-12413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>
> Currently, HDFS Inotify already supports meta changes like a replication 
> change for a file. We should similarly support erasure coding policy 
> setting/unsetting for a file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12413) Inotify should support erasure coding policy op as replica meta change

2017-09-10 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-12413:


 Summary: Inotify should support erasure coding policy op as 
replica meta change
 Key: HDFS-12413
 URL: https://issues.apache.org/jira/browse/HDFS-12413
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Reporter: Kai Zheng


Currently, HDFS Inotify already supports meta changes like a replication change 
for a file. We should similarly support erasure coding policy setting/unsetting 
for a file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12398) Use JUnit Parameterized test suite in TestWriteReadStripedFile

2017-09-10 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160561#comment-16160561
 ] 

Huafeng Wang commented on HDFS-12398:
-

Hi [~drankye], many thanks for your review.
{quote}
1. The current way of having many test methods are much better readable;
{quote}
That's true; I can add some comments on the parameters if you wish. But I think 
the EC file names currently also tell what each test is doing.
{quote}
2. It's also easier to debug if some of them are failed;
{quote}
That's also true, and it's a limitation of JUnit Parameterized. 
{quote}
3. More important, every test case (contained in a test method) needs a brand 
new cluster to start with;
{quote}
That's intended: in each test, a datanode is randomly killed, so starting 
with a new cluster is needed.
{quote}
4. Timeout can be fine-tuned for each test method in current way.
{quote}
That's not true: before the refactor, the timeout was controlled by
{code}
@Rule
public Timeout globalTimeout = new Timeout(30)
{code}
which applies the same timeout to all test methods in a class. 
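For reference, a minimal sketch of the Parameterized layout under discussion (illustrative only; the parameter values and names are made up, not taken from the patch):

{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestWriteReadStripedFileSketch {
  // The @Parameters name gives each case a readable label for debugging.
  @Parameters(name = "fileSize={0}, killDatanode={1}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
        {1024, false}, {1024, true}, {65536, false}, {65536, true}});
  }

  private final int fileSize;
  private final boolean killDatanode;

  public TestWriteReadStripedFileSketch(int fileSize, boolean killDatanode) {
    this.fileSize = fileSize;
    this.killDatanode = killDatanode;
  }

  // A per-method timeout still works, unlike a single class-wide Timeout rule.
  @Test(timeout = 90000)
  public void testWriteRead() throws Exception {
    // start a fresh MiniDFSCluster per case, write/read the striped file,
    // and optionally kill a datanode mid-write
  }
}
{code}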


> Use JUnit Parameterized test suite in TestWriteReadStripedFile
> --
>
> Key: HDFS-12398
> URL: https://issues.apache.org/jira/browse/HDFS-12398
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Huafeng Wang
>Assignee: Huafeng Wang
>Priority: Trivial
> Attachments: HDFS-12398.001.patch, HDFS-12398.002.patch
>
>
> TestWriteReadStripedFile basically runs the full cross product of file sizes 
> with datanode failure or not. It's better to use a JUnit Parameterized test 
> suite.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout

2017-09-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160548#comment-16160548
 ] 

Weiwei Yang edited comment on HDFS-12401 at 9/11/17 2:00 AM:
-

Hi [~xyao]

Do you have the Jenkins report from when this happens? I could not get this 
reproduced in my local environment. Reducing the priority, as this seems to be an 
intermittent UT failure which doesn't necessarily mean a code bug. Thank you.


was (Author: cheersyang):
Hi [~xyao]

Do you have the Jenkins report from when this happens? I could not get this 
reproduced in my local environment. Reducing the priority, as this seems to be an 
intermittent UT failure which doesn't necessarily report a code bug. Thank you.

> Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout
> --
>
> Key: HDFS-12401
> URL: https://issues.apache.org/jira/browse/HDFS-12401
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>Priority: Minor
>
> {code}
> testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService)
>   Time elapsed: 100.383 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for condition. 
> Thread diagnostics:
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout

2017-09-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160548#comment-16160548
 ] 

Weiwei Yang edited comment on HDFS-12401 at 9/11/17 1:59 AM:
-

Hi [~xyao]

Do you have the Jenkins report from when this happens? I could not get this 
reproduced in my local environment. Reducing the priority, as this seems to be an 
intermittent UT failure which doesn't necessarily report a code bug. Thank you.


was (Author: cheersyang):
Hi [~xyao]

Do you have the Jenkins report from when this happens? I could not get this 
reproduced in my local environment. Thank you.

> Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes timeout
> --
>
> Key: HDFS-12401
> URL: https://issues.apache.org/jira/browse/HDFS-12401
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>Priority: Minor
>
> {code}
> testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService)
>   Time elapsed: 100.383 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for condition. 
> Thread diagnostics:
> {code}






[jira] [Updated] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes times out

2017-09-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12401:
---
Priority: Minor  (was: Major)

> Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes times out
> --
>
> Key: HDFS-12401
> URL: https://issues.apache.org/jira/browse/HDFS-12401
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>Priority: Minor
>
> {code}
> testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService)
>   Time elapsed: 100.383 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for condition. 
> Thread diagnostics:
> {code}






[jira] [Commented] (HDFS-12401) Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes times out

2017-09-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160548#comment-16160548
 ] 

Weiwei Yang commented on HDFS-12401:


Hi [~xyao]

Do you have the jenkins report when this happens? I could not get this 
reproduced on my local environment. Thank you.

> Ozone: TestBlockDeletingService#testBlockDeletionTimeout sometimes times out
> --
>
> Key: HDFS-12401
> URL: https://issues.apache.org/jira/browse/HDFS-12401
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>
> {code}
> testBlockDeletionTimeout(org.apache.hadoop.ozone.container.common.TestBlockDeletingService)
>   Time elapsed: 100.383 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Timed out waiting for condition. 
> Thread diagnostics:
> {code}






[jira] [Updated] (HDFS-12397) Ozone: KSM: multiple delete methods in KSMMetadataManager

2017-09-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12397:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

I've committed this to the feature branch, thanks for fixing this 
[~nandakumar131].

> Ozone: KSM: multiple delete methods in KSMMetadataManager
> -
>
> Key: HDFS-12397
> URL: https://issues.apache.org/jira/browse/HDFS-12397
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12397-HDFS-7240.000.patch
>
>
> {{KSMMetadataManager}} has two delete methods which do the same thing.
> * {{void delete(byte[] key) throws IOException}}
> * {{void deleteKey(byte[] key) throws IOException}}
> One can be removed.
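One way to carry out the consolidation (a sketch of the general approach only, 
not the committed patch): keep a single {{delete}} and, if callers need a 
migration window, let the duplicate delegate to it before being removed.

{code}
// Sketch only; not the committed patch. A single delete(byte[]) remains,
// and deleteKey(byte[]) delegates until callers are migrated away.
import java.io.IOException;

public interface KSMMetadataManagerSketch {

  /** Deletes the given key from the metadata store. */
  void delete(byte[] key) throws IOException;

  /** @deprecated use {@link #delete(byte[])}; kept only for migration. */
  @Deprecated
  default void deleteKey(byte[] key) throws IOException {
    delete(key);
  }
}
{code}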






[jira] [Updated] (HDFS-11924) FSPermissionChecker.checkTraverse doesn't pass FsAction access properly

2017-09-10 Thread Gavin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gavin updated HDFS-11924:
-
Reporter: Zsombor Gegesy  (was: Zsombor Gegesy)

> FSPermissionChecker.checkTraverse doesn't pass FsAction access properly
> ---
>
> Key: HDFS-11924
> URL: https://issues.apache.org/jira/browse/HDFS-11924
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Zsombor Gegesy
>  Labels: hdfs, hdfspermission
> Attachments: 
> 0001-HDFS-11924-Pass-FsAction-to-the-external-AccessContr.patch
>
>
> In 2.7.1, during a file access check, the AccessControlEnforcer is called with 
> the access parameter filled with FsAction values.
> A stack trace captured in this case:
> {code}
>   FSPermissionChecker.checkPermission(INodesInPath, boolean, FsAction, 
> FsAction, FsAction, FsAction, boolean) line: 189   
>   FSDirectory.checkPermission(FSPermissionChecker, INodesInPath, boolean, 
> FsAction, FsAction, FsAction, FsAction, boolean) line: 1698 
>   FSDirectory.checkPermission(FSPermissionChecker, INodesInPath, boolean, 
> FsAction, FsAction, FsAction, FsAction) line: 1682  
>   FSDirectory.checkPathAccess(FSPermissionChecker, INodesInPath, 
> FsAction) line: 1656 
>   FSNamesystem.appendFileInternal(FSPermissionChecker, INodesInPath, 
> String, String, boolean, boolean) line: 2668 
>   FSNamesystem.appendFileInt(String, String, String, boolean, boolean) 
> line: 2985 
>   FSNamesystem.appendFile(String, String, String, EnumSet, 
> boolean) line: 2952
>   NameNodeRpcServer.append(String, String, EnumSetWritable) 
> line: 653 
>   ClientNamenodeProtocolServerSideTranslatorPB.append(RpcController, 
> ClientNamenodeProtocolProtos$AppendRequestProto) line: 421   
>   
> ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(Descriptors$MethodDescriptor,
>  RpcController, Message) line: not available  
>   ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(RPC$Server, String, 
> Writable, long) line: 616  
>   ProtobufRpcEngine$Server(RPC$Server).call(RPC$RpcKind, String, 
> Writable, long) line: 969
>   Server$Handler$1.run() line: 2049   
>   Server$Handler$1.run() line: 2045   
>   AccessController.doPrivileged(PrivilegedExceptionAction, 
> AccessControlContext) line: not available [native method]   
>   Subject.doAs(Subject, PrivilegedExceptionAction) line: 422   
>   UserGroupInformation.doAs(PrivilegedExceptionAction) line: 1657  
> {code}
> However, in 2.8.0 this value is changed to null, because 
> FSPermissionChecker.checkTraverse(FSPermissionChecker pc, INodesInPath iip, 
> boolean resolveLink) cannot pass the required information, so it simply 
> uses 'null'.
> This is a regression between 2.7.1 and 2.8.0, because an external 
> AccessControlEnforcer cannot work properly without it.
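To illustrate why the null matters (the method below is deliberately simplified; 
the real {{AccessControlEnforcer#checkPermission}} takes a much longer parameter 
list): an enforcer that keys its decision on the requested FsAction has nothing 
to evaluate when access is null.

{code}
// Deliberately simplified illustration; the real
// INodeAttributeProvider.AccessControlEnforcer#checkPermission has a much
// longer signature. The point: with access == null, an external enforcer
// has no basis for an allow/deny decision on the traversed component.
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public class EnforcerSketch {

  void checkPermission(String path, FsAction access)
      throws AccessControlException {
    if (access == null) {
      // The 2.8.0 behavior described above: checkTraverse cannot forward
      // the requested action, so the enforcer only ever sees null here.
      throw new AccessControlException(
          "No requested FsAction supplied for " + path);
    }
    // ... evaluate 'access' against the external policy here ...
  }
}
{code}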






[jira] [Updated] (HDFS-12397) Ozone: KSM: multiple delete methods in KSMMetadataManager

2017-09-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12397:
---
Priority: Minor  (was: Major)

> Ozone: KSM: multiple delete methods in KSMMetadataManager
> -
>
> Key: HDFS-12397
> URL: https://issues.apache.org/jira/browse/HDFS-12397
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
>  Labels: ozoneMerge
> Attachments: HDFS-12397-HDFS-7240.000.patch
>
>
> {{KSMMetadataManager}} has two delete methods which do the same thing.
> * {{void delete(byte[] key) throws IOException}}
> * {{void deleteKey(byte[] key) throws IOException}}
> One can be removed.






[jira] [Commented] (HDFS-12397) Ozone: KSM: multiple delete methods in KSMMetadataManager

2017-09-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160539#comment-16160539
 ] 

Weiwei Yang commented on HDFS-12397:


+1, I will commit this shortly.

> Ozone: KSM: multiple delete methods in KSMMetadataManager
> -
>
> Key: HDFS-12397
> URL: https://issues.apache.org/jira/browse/HDFS-12397
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>  Labels: ozoneMerge
> Attachments: HDFS-12397-HDFS-7240.000.patch
>
>
> {{KSMMetadataManager}} has two delete methods which do the same thing.
> * {{void delete(byte[] key) throws IOException}}
> * {{void deleteKey(byte[] key) throws IOException}}
> One can be removed.






[jira] [Updated] (HDFS-7878) API - expose a unique file identifier

2017-09-10 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-7878:

Attachment: HDFS-7878.12.patch

> API - expose a unique file identifier
> --
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as a duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> The INode ID for the file should be easy to expose; alternatively, an ID could 
> be derived from block IDs to account for appends...
> This is useful e.g. as a per-file cache key, to make sure a cache stays correct 
> when the file is overwritten.
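As a sketch of that caching use case (the {{fileId}} here stands in for whatever 
identifier this JIRA ends up exposing, e.g. an INode ID; the class is invented 
for illustration):

{code}
// Sketch of the motivating use case: keying a cache on (path, fileId) so
// a cached entry stops matching when the file at the same path is
// overwritten and receives a new inode ID.
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FileIdCacheSketch<V> {

  private static final class Key {
    final String path;
    final long fileId; // unique per inode; an overwrite changes it

    Key(String path, long fileId) {
      this.path = path;
      this.fileId = fileId;
    }

    @Override public boolean equals(Object o) {
      return o instanceof Key
          && ((Key) o).fileId == fileId
          && ((Key) o).path.equals(path);
    }

    @Override public int hashCode() {
      return Objects.hash(path, fileId);
    }
  }

  private final ConcurrentMap<Key, V> cache = new ConcurrentHashMap<>();

  public V get(String path, long fileId) {
    return cache.get(new Key(path, fileId));
  }

  public void put(String path, long fileId, V value) {
    cache.put(new Key(path, fileId), value);
  }
}
{code}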






[jira] [Commented] (HDFS-12384) Fixing compilation issue with BanDuplicateClasses

2017-09-10 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160453#comment-16160453
 ] 

Íñigo Goiri commented on HDFS-12384:


Thanks [~brahmareddy]. I tentatively committed this to unblock the broken build.
If there is a better fix (or some additional tweak), I'd reopen.

> Fixing compilation issue with BanDuplicateClasses
> -
>
> Key: HDFS-12384
> URL: https://issues.apache.org/jira/browse/HDFS-12384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12384-HDFS-10467-000.patch, 
> HDFS-12384-HDFS-10467-001.patch, HDFS-12384-HDFS-10467-002.patch, 
> HDFS-12384-HDFS-10467-003.patch, HDFS-12384-HDFS-10467-004.patch, 
> HDFS-12384-HDFS-10467-005.patch
>
>
> The build is failing because of changes in {{ClientProtocol}} and dependencies 
> from {{CuratorManager}} that were indirectly added to {{hadoop-client-modules}}:
> {code}
> [INFO]   Adding ignore: *
> [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses 
> failed with message:
> Duplicate classes found:
>   Found in:
> 
> org.apache.hadoop:hadoop-client-minicluster:jar:3.0.0-beta1-SNAPSHOT:compile
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT:compile
>   Duplicate classes:
> 
> org/apache/hadoop/shaded/org/apache/curator/framework/api/DeleteBuilder.class
> 
> org/apache/hadoop/shaded/org/apache/curator/framework/CuratorFramework.class
> {code}






[jira] [Commented] (HDFS-7878) API - expose a unique file identifier

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160256#comment-16160256
 ] 

Hadoop QA commented on HDFS-7878:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
50s{color} | {color:green} root generated 0 new + 1281 unchanged - 2 fixed = 
1281 total (was 1283) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  5s{color} | {color:orange} root: The patch generated 6 new + 446 unchanged 
- 4 fixed = 452 total (was 450) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestFileStatusSerialization |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|