[jira] [Commented] (HDFS-12310) [SPS]: Provide an option to track the status of in progress requests

2017-10-20 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213734#comment-16213734
 ] 

Surendra Singh Lilhore commented on HDFS-12310:
---

Thanks [~eddyxu] for the review. I will update the patch soon.

> [SPS]: Provide an option to track the status of in progress requests
> 
>
> Key: HDFS-12310
> URL: https://issues.apache.org/jira/browse/HDFS-12310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12310-HDFS-10285-01.patch, 
> HDFS-12310-HDFS-10285-02.patch, HDFS-12310-HDFS-10285-03.patch
>
>
> As per [~andrew.wang]'s review comments in HDFS-10285, this is the JIRA for 
> tracking the options for how we track the progress of SPS requests.
>  






[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213721#comment-16213721
 ] 

Hadoop QA commented on HDFS-12681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 25s{color} | {color:orange} root: The patch generated 69 new + 630 unchanged 
- 10 fixed = 699 total (was 640) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 25s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
19s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}248m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.protocol.HdfsFileStatus$Builder.path(byte[]) may 
expose internal representation by storing an externally mutable object into 
HdfsFileStatus$Builder.path  At HdfsFileStatus.java:by storing an externally 
mutable object into HdfsFileStatus$Builder.path  At HdfsFileStatus.java:[line 
459] |
|  |  org.apache.hadoop.hdfs.protocol.HdfsF
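The warning above is FindBugs' "may expose internal representation" complaint about storing a caller-supplied {{byte[]}} directly. A minimal sketch of the usual remedy (a defensive copy), using a hypothetical builder field named {{path}} rather than the patch's exact code:

{code:java}
// Hedged sketch, not the patch: clone the incoming array so the builder never
// stores (or hands back) an externally mutable byte[], which is the usual fix
// for EI_EXPOSE_REP / EI_EXPOSE_REP2 style FindBugs warnings.
class PathBuilderSketch {
  private byte[] path;

  PathBuilderSketch path(byte[] path) {
    this.path = (path == null) ? null : path.clone();  // copy, don't alias the caller's array
    return this;
  }

  byte[] getPath() {
    return (path == null) ? null : path.clone();        // copy on the way out, too
  }
}
{code}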

[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213707#comment-16213707
 ] 

Hadoop QA commented on HDFS-7878:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} root: The patch generated 0 new + 346 unchanged - 2 
fixed = 346 total (was 348) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-common-project_hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.federation.metrics.TestFederationMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue |

[jira] [Commented] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213696#comment-16213696
 ] 

Hadoop QA commented on HDFS-12683:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 1 new + 44 unchanged - 
0 fixed = 45 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
7m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12683 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893369/HDFS-12683.09.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checksty

[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-10-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213677#comment-16213677
 ] 

Xiao Chen commented on HDFS-11467:
--

Thanks for working on this [~HuafengWang], and others for reviewing. The patch 
looks good in general; some comments:
- {{PBHelperClient.convertErasureCodingPolicyFully}}: is this added to persist 
{{ErasureCodingPolicyState}}? If so, HDFS-12682 / HDFS-12686 would take care of it.
- We need fuller test cases. I suggest looking at the cases in HDFS-12395 and 
applying similar coverage here (a rough sketch follows these comments). It looks 
to me like we need: add->enable, enable->disable, and some combinations with 
remove. From the fsimage's perspective this sounds no different from the simpler 
cases, but in the past we have seen issues where some operations were handled by 
edits but not by the fsimage, so I think better coverage is safer.
- Let's not touch {{TestOfflineImageViewerForAcl}}, for cleanliness.
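A rough, hedged sketch of the kind of policy-state transitions the extra coverage could drive before saving the fsimage and running the OIV XML/ReverseXML round trip (the round trip itself would follow the existing pattern in the OIV tests; the class name and exact calls below are illustrative, not the patch):

{code:java}
// Hedged sketch, not the patch's tests: exercise add -> enable -> disable for a
// user-defined EC policy (plus enabling a built-in one) so the saved fsimage
// carries each ErasureCodingPolicyState before the OIV round trip is checked.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
import org.apache.hadoop.io.erasurecode.ECSchema;

public class ErasureCodingSectionSketch {
  public static void main(String[] args) throws Exception {
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(new Configuration()).numDataNodes(0).build();
    try {
      DistributedFileSystem dfs = cluster.getFileSystem();

      // add -> enable -> disable for a user-defined policy
      ErasureCodingPolicy userPolicy =
          new ErasureCodingPolicy(new ECSchema("rs", 3, 2), 8 * 1024);
      dfs.addErasureCodingPolicies(new ErasureCodingPolicy[] {userPolicy});
      dfs.enableErasureCodingPolicy(userPolicy.getName());
      dfs.disableErasureCodingPolicy(userPolicy.getName());

      // another combination: enable a built-in policy
      dfs.enableErasureCodingPolicy("XOR-2-1-1024k");

      // persist the states; the XML -> ReverseXML round trip would then run
      // against the saved fsimage, as the existing OIV tests do.
      dfs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
      dfs.saveNamespace();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}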


> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch
>
>
> As discussed in HDFS-7859, after ErasureCoding section is added into fsimage, 
> we would like to also support exporting this section into an XML back and 
> forth using the OIV tool.






[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-10-20 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213639#comment-16213639
 ] 

Virajith Jalaparti commented on HDFS-12665:
---

Thanks for posting this [~ehiggs]. A few comments/questions:
# Can you please add javadocs for all the new classes added?
# Is there a reason to refactor {{FileRegion}} and introduce 
{{ProvidedStorageLocation}}? Also, I think the name {{ProvidedStorageLocation}} 
is confusing given there is also a {{StorageLocation}}, which is something very 
different. Maybe rename it to {{ProvidedLocation}}.
# The new {{AliasMap}} class has a confusing name. It is supposed to be an 
implementation of {{AliasMapProtocol}}, but its name is a prefix of the latter.
# Renaming {{LevelDBAliasMapClient}} to something along the lines of 
{{InMemoryLevelDBAliasMap}} would make the class name more descriptive. In 
general, adding a similar prefix to {{AliasMapProtocol}} and 
{{LevelDBAliasMapServer}} would improve the readability of the code.
# Can we move {{LevelDBAliasMapClient}} to the 
{{org.apache.hadoop.hdfs.server.common.BlockAliasMapImpl}} package? On a 
related note, we should rename this to 
{{org.apache.hadoop.hdfs.server.common.blockaliasmap.impl}} in HDFS-11902. I 
can fix this when I post the next version of the patch for HDFS-11902.
# {{ITAliasMap}} only contains unit tests. I believe the convention is to start 
the name of the class with Test.
# Why was the block pool id removed from {{FileRegion}}? It was used as a check 
in the DN so that only blocks belonging to the correct block pool id were 
reported to the NN.
# Why rename {{getVolumeMap}} to {{fetchVolumeMap}} in 
{{ProvidedBlockPoolSlice}}?
# In {{startAliasMapServerIfNecessary}}, I think the alias map server should be 
started only if PROVIDED storage is configured, i.e., check whether 
{{DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED}} is set to true.
# Some of the changes have led to lines crossing the 80-character limit. Can 
you please fix them?

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in-memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
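A minimal, hedged sketch of the kind of LevelDB wrapper described above, using the {{org.iq80.leveldb}} / leveldbjni classes already pulled in via the Timeline Service. The real patch encodes the key (blockpool, blockid, genstamp) and value (url, offset, length, nonce) with protobuf; plain {{DataOutputBuffer}} serialization is used here only to keep the sketch self-contained, and every name below is illustrative rather than the patch's API:

{code:java}
// Hedged sketch, not the patch: a LevelDB-backed alias map keyed by
// (blockpool, blockid, genstamp) with (url, offset, length, nonce) values.
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.io.DataOutputBuffer;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import static org.fusesource.leveldbjni.JniDBFactory.factory;

class LevelDbAliasMapSketch implements AutoCloseable {
  private final DB db;

  LevelDbAliasMapSketch(File dir) throws IOException {
    db = factory.open(dir, new Options().createIfMissing(true));
  }

  static byte[] key(String blockPoolId, long blockId, long genStamp)
      throws IOException {
    DataOutputBuffer out = new DataOutputBuffer();
    out.writeUTF(blockPoolId);
    out.writeLong(blockId);
    out.writeLong(genStamp);
    return Arrays.copyOf(out.getData(), out.getLength());
  }

  static byte[] value(String url, long offset, long length, byte[] nonce)
      throws IOException {
    DataOutputBuffer out = new DataOutputBuffer();
    out.writeUTF(url);
    out.writeLong(offset);
    out.writeLong(length);
    out.writeInt(nonce.length);
    out.write(nonce, 0, nonce.length);
    return Arrays.copyOf(out.getData(), out.getLength());
  }

  void put(byte[] key, byte[] value) { db.put(key, value); }  // write one alias
  byte[] get(byte[] key) { return db.get(key); }              // read it back

  @Override
  public void close() throws IOException { db.close(); }
}
{code}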






[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213620#comment-16213620
 ] 

Hadoop QA commented on HDFS-11902:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
23s{color} | {color:red} Docker failed to build yetus/hadoop:71bbb86. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11902 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893383/HDFS-11902-HDFS-9806.009.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21774/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: (was: HDFS-11902-HDFS-9806.009.patch)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: HDFS-11902-HDFS-9806.009.patch

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-20 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213581#comment-16213581
 ] 

Chris Douglas commented on HDFS-12681:
--

This adds a builder pattern for {{HdfsFileStatus}}.

[~ste...@apache.org], [~andrew.wang], do you have cycles to take a look?
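For readers skimming the thread, a minimal sketch of what constructing an {{HdfsFileStatus}} through such a builder could look like. Only {{path(byte[])}} is named in the QA output earlier in this digest; the other setter names are assumptions for illustration, not the patch's exact API:

{code:java}
// Hedged sketch, not the patch: a fluent builder replacing the long telescoping
// HdfsFileStatus constructor; setter names other than path(byte[]) are assumed.
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

class HdfsFileStatusBuilderSketch {
  static HdfsFileStatus of(byte[] localName, long length, LocatedBlocks blocks) {
    return new HdfsFileStatus.Builder()
        .path(localName)                  // local name of the inode, in bytes
        .length(length)
        .isdir(false)
        .replication((short) 3)
        .blocksize(128L * 1024 * 1024)
        .mtime(System.currentTimeMillis())
        .locations(blocks)                // HdfsLocatedFileStatus folded in
        .build();
  }
}
{code}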

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.






[jira] [Commented] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213577#comment-16213577
 ] 

Hudson commented on HDFS-12518:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13120 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13120/])
HDFS-12518. Re-encryption should handle task cancellation and progress (xiao: 
rev 248d9b6fff648cdb02581d458556b6f7c090ef1a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionUpdater.java


> Re-encryption should handle task cancellation and progress better
> -
>
> Key: HDFS-12518
> URL: https://issues.apache.org/jira/browse/HDFS-12518
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 3.0.0
>
> Attachments: HDFS-12518.01.patch, HDFS-12518.02.patch, 
> HDFS-12518.03.patch
>
>
> Re-encryption should handle task cancellation and progress tracking better in 
> general.
> In a recent internal report, a canceled re-encryption could lead to the 
> progress of the zone being 'Processing' forever. Sending a new cancel command 
> would make it complete, but new re-encryptions for the same zone wouldn't 
> work because the canceled future is not removed.
> This jira proposes to fix that, and to enhance the current handling so a new 
> command would start from a clean state.
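A minimal sketch of the fix pattern the description points at: remove the cancelled {{Future}} from the per-zone tracking map so a later re-encryption of the same zone can be scheduled. Names are illustrative, not the patch's:

{code:java}
// Hedged sketch, not the patch: track one task per zone and drop the entry on
// cancellation (and on completion) so a stale, cancelled Future never blocks a
// new submission for the same zone.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

class ZoneTaskTrackerSketch {
  private final Map<Long, Future<?>> zoneTasks = new ConcurrentHashMap<>();

  void submit(long zoneId, ExecutorService pool, Runnable reencryptTask) {
    // at most one in-flight task per zone
    zoneTasks.computeIfAbsent(zoneId, id -> pool.submit(reencryptTask));
  }

  void cancel(long zoneId) {
    Future<?> f = zoneTasks.remove(zoneId);  // remove first: zone is no longer "Processing"
    if (f != null) {
      f.cancel(true);
    }
  }

  void onComplete(long zoneId) {
    zoneTasks.remove(zoneId);                // normal completion also clears the entry
  }
}
{code}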






[jira] [Updated] (HDFS-7878) API - expose an unique file identifier

2017-10-20 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-7878:

Attachment: HDFS-7878.16.patch

> API - expose an unique file identifier
> --
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.16.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as a duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> The INode ID for the file should be easy to expose; alternatively, the ID could 
> be derived from block IDs, to account for appends...
> This is useful, e.g., as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.






[jira] [Commented] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213532#comment-16213532
 ] 

Bharat Viswanadham commented on HDFS-12683:
---

Updated patch v09 to address the review comments.

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch, HDFS-12683.08.patch, 
> HDFS-12683.09.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating the server.
> Occasionally we have seen the DFSZKFailOverController shut down with no 
> exception or error logged.
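A minimal sketch of the ordering the description asks for; the helper names are hypothetical and this is not the patch itself:

{code:java}
// Hedged sketch, not the patch: log the fatal exception first, then tear down
// connections, then exit, so a crash always leaves a trace in the ZKFC log.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.util.ExitUtil;

class ZkfcExitOrderingSketch {
  private static final Log LOG = LogFactory.getLog(ZkfcExitOrderingSketch.class);

  void runAndExitOnFatal(Runnable doRun, Runnable closeConnections) {
    try {
      doRun.run();
    } catch (Throwable t) {
      LOG.fatal("DFSZKFailoverController exiting due to fatal error", t); // log first
      closeConnections.run();                                             // then clean up
      ExitUtil.terminate(1, t);                                           // then exit
    }
  }
}
{code}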






[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Attachment: HDFS-12683.09.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch, HDFS-12683.08.patch, 
> HDFS-12683.09.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating the server.
> Occasionally we have seen the DFSZKFailOverController shut down with no 
> exception or error logged.






[jira] [Updated] (HDFS-12653) Implement toArray() and subArray() for ReadOnlyList

2017-10-20 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12653:
--
Attachment: HDFS-12653.01.patch

Attached v01 patch to address the following:
1. Implemented {{ReadOnlyList#toArray()}} and {{ReadOnlyList#subArray()}} to 
return an array view of the backing list.
2. TestReadOnly - unit tests to verify various contracts in ReadOnlyList.
{{ReadOnlyList#toArray()}} and {{ReadOnlyList#subArray()}} can be made use of 
when getting attributes from INodeAttributesProvider (HDFS-12652) and when 
working on the children list for a snapshot. Will follow up on these after 
completing this jira.
[~eddyxu], [~yzhangal], [~daryn], can you please take a look at the patch?

> Implement toArray() and subArray() for ReadOnlyList
> ---
>
> Key: HDFS-12653
> URL: https://issues.apache.org/jira/browse/HDFS-12653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12653.01.patch
>
>
> {{ReadOnlyList}} today gives an unmodifiable view of the backing List. The 
> class supports the following {{Util}} methods for easy construction of 
> read-only views of any given list:
> {noformat}
> public static <E> ReadOnlyList<E> asReadOnlyList(final List<E> list)
> public static <E> List<E> asList(final ReadOnlyList<E> list)
> {noformat}
> {{asList}} above additionally overrides {{Object[] toArray()}} of the 
> {{java.util.List}} interface. Unlike {{java.util.List}}, the above one 
> returns an array of Objects referring to the backing list and avoids any 
> copying of objects. Given that we have many usages of read-only lists:
> 1. Let's have a light-weight / shared-view {{toArray()}} implementation for 
> {{ReadOnlyList}} as well.
> 2. Additionally, similar to {{java.util.List#subList(fromIndex, toIndex)}}, 
> let's have {{ReadOnlyList#subArray(fromIndex, toIndex)}}.
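A minimal sketch of the proposed shape: an array view that shares element references with the backing list, plus a ranged variant analogous to {{subList}}. Names and placement are illustrative, not the attached patch:

{code:java}
// Hedged sketch, not the patch: shared-view toArray() and subArray() helpers over
// a backing List<E>; element references are copied into the array, the elements
// themselves are not cloned.
import java.util.List;

final class ReadOnlyArrayViewSketch {
  /** Array view of the whole backing list. */
  static <E> Object[] toArray(List<E> backing) {
    Object[] view = new Object[backing.size()];
    for (int i = 0; i < view.length; i++) {
      view[i] = backing.get(i);   // reference to the backing element, no deep copy
    }
    return view;
  }

  /** Array view of [fromIndex, toIndex), analogous to List#subList. */
  static <E> Object[] subArray(List<E> backing, int fromIndex, int toIndex) {
    if (fromIndex < 0 || toIndex > backing.size() || fromIndex > toIndex) {
      throw new IndexOutOfBoundsException(
          "range [" + fromIndex + ", " + toIndex + ") out of bounds");
    }
    Object[] view = new Object[toIndex - fromIndex];
    for (int i = fromIndex; i < toIndex; i++) {
      view[i - fromIndex] = backing.get(i);
    }
    return view;
  }
}
{code}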






[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213510#comment-16213510
 ] 

Hadoop QA commented on HDFS-7878:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 33s{color} | {color:orange} root: The patch generated 8 new + 346 unchanged 
- 2 fixed = 354 total (was 348) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-common-project_hadoop-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 

[jira] [Comment Edited] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213506#comment-16213506
 ] 

Bharat Viswanadham edited comment on HDFS-12683 at 10/20/17 11:47 PM:
--

Updated the patch.
Attached patch v8 to fix checkstyle issues.

Ran the tests locally, and the previously failing test cases now pass.


was (Author: bharatviswa):
Updated the patch.
Attached patch v8 to fix checkstyle issues.

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch, HDFS-12683.08.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating the server.
> Occasionally we have seen the DFSZKFailOverController shut down with no 
> exception or error logged.






[jira] [Commented] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213506#comment-16213506
 ] 

Bharat Viswanadham commented on HDFS-12683:
---

Updated the patch.
Attached patch v8 to fix checkstyle issues.

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch, HDFS-12683.08.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating the server.
> Occasionally we have seen the DFSZKFailOverController shut down with no 
> exception or error logged.






[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Attachment: HDFS-12683.08.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch, HDFS-12683.08.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating the server.
> Occasionally we have seen the DFSZKFailOverController shut down with no 
> exception or error logged.






[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-20 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Attachment: HDFS-12681.01.patch

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.






[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213502#comment-16213502
 ] 

Hadoop QA commented on HDFS-11902:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
15s{color} | {color:red} Docker failed to build yetus/hadoop:71bbb86. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11902 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893349/HDFS-11902-HDFS-9806.009.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21770/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.






[jira] [Updated] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12518:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0.
Thanks for the reviews, Wei-Chiu!

> Re-encryption should handle task cancellation and progress better
> -
>
> Key: HDFS-12518
> URL: https://issues.apache.org/jira/browse/HDFS-12518
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 3.0.0
>
> Attachments: HDFS-12518.01.patch, HDFS-12518.02.patch, 
> HDFS-12518.03.patch
>
>
> Re-encryption should handle task cancellation and progress tracking better in 
> general.
> In a recent internal report, a canceled re-encryption could lead to the 
> progress of the zone being 'Processing' forever. Sending a new cancel command 
> would make it complete, but new re-encryptions for the same zone wouldn't 
> work because the canceled future is not removed.
> This jira proposes to fix that, and to enhance the current handling so a new 
> command would start from a clean state.






[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-10-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Attachment: HDFS-12396.002.patch

Attached a new patch.
bq. In KMSUtil, I'm not fond of returning null when passed null
Null is not an invalid input. If EZ is not enabled, this will be null.

{quote}
The moved/new methods seem like they should be in KMSUtil, rather than 
DFSUtilClient, with private/unstable annotations in case we need to make 
further modifications.
{quote}
Added. After moving all the methods from {{hadoop-hdfs-client}} to 
{{hadoop-common}}, I also needed to move {{UnknownCipherSuiteException.java}} 
from {{hadoop-hdfs-project/hadoop-hdfs-client}} to 
{{hadoop-common-project/hadoop-common}}. I think nothing should break.

{quote}
Minor, KeyProviderHelper is rather generic and doesn't convey what it does. I'd 
consider something more like KeyProviderTokenAdapter, KeyProviderTokenIssuer, 
etc.
{quote}
Addressed.

{quote}
Does WebHdfsFileSystem#keyProvider really need to exist and only be set for 
tests? Could a spy be used instead?
{quote}
Addressed.
Please review.
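For reference, a hedged sketch of the kind of adapter interface being discussed, following the {{KeyProviderTokenIssuer}} naming suggestion quoted above; the method set is an assumption, not the committed API:

{code:java}
// Hedged sketch only: a small interface a FileSystem (e.g. WebHdfsFileSystem)
// could implement so callers can reach the KMS KeyProvider and fetch/renew its
// delegation tokens. Method names are illustrative assumptions.
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.crypto.key.KeyProvider;

public interface KeyProviderTokenIssuer {
  /** @return the key provider backing this filesystem, or null if EZ is not enabled. */
  KeyProvider getKeyProvider() throws IOException;

  /** @return the URI of that key provider, used to resolve and renew its tokens. */
  URI getKeyProviderUri() throws IOException;
}
{code}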

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch, HDFS-12396.002.patch
>
>







[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213441#comment-16213441
 ] 

Hadoop QA commented on HDFS-11096:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
7m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
7s{color} | {color:orange} The patch generated 422 new + 0 unchanged - 0 fixed 
= 422 total (was 0) {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
4s{color} | {color:red} The patch generated 7 new + 21 unchanged - 0 fixed = 28 
total (was 21) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
6s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-11096 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893339/HDFS-11096.006.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  pylint  |
| uname | Linux c3d5608b3c40 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f36cbc8 |
| shellcheck | v0.4.6 |
| pylint | v1.7.4 |
| pylint | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21769/artifact/patchprocess/diff-patch-pylint.txt
 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21769/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21769/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21769/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0

[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-20 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Status: Patch Available  (was: Open)

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.
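
For illustration, a minimal sketch of the type relationship described above; the types and fields below are simplified stand-ins, not the actual Hadoop class definitions:

{code:java}
// Simplified sketch: if HdfsFileStatus extended LocatedFileStatus, the
// located variant would not need to copy common fields or shed
// HDFS-specific data when converting.
class FileStatus {
  long length;
  boolean isDir;
}

class LocatedFileStatus extends FileStatus {
  Object[] blockLocations;   // stand-in for BlockLocation[]
}

// Proposed shape: the HDFS-specific status is-a LocatedFileStatus, so it can
// be handed to callers expecting LocatedFileStatus without any conversion.
class HdfsFileStatus extends LocatedFileStatus {
  byte[] symlink;            // HDFS-specific fields stay on the subtype
  long fileId;
}
{code}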



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-20 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12681:
-
Attachment: HDFS-12681.00.patch

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-10-20 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213420#comment-16213420
 ] 

Lei (Eddy) Xu commented on HDFS-11467:
--

LGTM. I would like to bring in [~xiaochen] for review, as he is working on the 
related JIRA HDFS-12682.

> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch
>
>
> As discussed in HDFS-7859, after ErasureCoding section is added into fsimage, 
> we would like to also support exporting this section into an XML back and 
> forth using the OIV tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213418#comment-16213418
 ] 

Hadoop QA commented on HDFS-12683:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 39s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverControllerStress |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12683 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893316/HDFS-12683.07.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f2c5dd3a546a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6b7c87c |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21767/art

[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: (was: HDFS-11902-HDFS-9806.009.patch)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: HDFS-11902-HDFS-9806.009.patch

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-20 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-10-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Status: Open  (was: Patch Available)

Thanks Daryn for review.
Canceling patch to address [~daryn]'s review comments.

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213323#comment-16213323
 ] 

Hadoop QA commented on HDFS-12518:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893326/HDFS-12518.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9723b21c6a52 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6b7c87c |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21765/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21765/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21765/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Re-encryption should handle task cancellation and progress better
> -

[jira] [Commented] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests

2017-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213318#comment-16213318
 ] 

Hudson commented on HDFS-12497:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13119 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13119/])
HDFS-12497. Re-enable TestDFSStripedOutputStreamWithFailure tests. (wang: rev 
0477eff8be4505ad2730ec16621105b6df9099ae)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailureWithRandomECPolicy.java


> Re-enable TestDFSStripedOutputStreamWithFailure tests
> -
>
> Key: HDFS-12497
> URL: https://issues.apache.org/jira/browse/HDFS-12497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: flaky-test, hdfs-ec-3.0-must-do
> Fix For: 3.0.0
>
> Attachments: HDFS-12497.001.patch, HDFS-12497.002.patch, 
> HDFS-12497.003.patch, HDFS-12497.004.patch
>
>
> We disabled this suite of tests in HDFS-12417 since they were very flaky. We 
> should fix these tests and re-enable them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-10-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213269#comment-16213269
 ] 

Xiao Chen commented on HDFS-12618:
--

Thanks for reporting and working on this [~wchevreuil]!

I agree this is an issue, but I'm not sure what the best solution would be - this 
appears to be a difficult problem. Let me look into this too and see if any 
other ideas pop up.

The reason the web UI shows the correct number is that it's just showing the total 
number of blocks from the block manager. I'm afraid we don't have 'total blocks 
under a directory' information, so that isn't useful for fsck.

Some issues I can see from the current patch:
- {{snapshotSeenBlocks}} could be huge if the snapshot dir is at the top 
level and has a lot of blocks under it. In extreme cases, this may put 
pressure on the NN. At the minimum we should make sure the extra space is allocated 
only when {{-includeSnapshots}} is set (see the sketch below).
- Where inodes are involved, file system locks are required. This adds burden 
to the NN, which should be minimized.
- Catching a {{RuntimeException}} and re-throwing it looks like bad practice. What 
are the RTEs that could be thrown that we need to wrap?
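
For illustration, a rough sketch of the block-ID-based deduplication idea from the first point above; this is not the fsck code itself, and the class and field names are assumptions:

{code:java}
import java.util.HashSet;
import java.util.Set;

// Counts each block at most once, and only pays the memory cost of the
// "seen" set when -includeSnapshots is requested.
class SnapshotAwareBlockCounter {
  private final boolean includeSnapshots;
  private Set<Long> seenBlockIds;   // allocated lazily
  private long totalBlocks;

  SnapshotAwareBlockCounter(boolean includeSnapshots) {
    this.includeSnapshots = includeSnapshots;
  }

  void countBlock(long blockId) {
    if (!includeSnapshots) {
      totalBlocks++;                // normal fsck path: no extra state
      return;
    }
    if (seenBlockIds == null) {
      seenBlockIds = new HashSet<>();
    }
    if (seenBlockIds.add(blockId)) {
      totalBlocks++;                // first time this block is seen
    }                               // duplicates via snapshot paths are skipped
  }

  long getTotalBlocks() {
    return totalBlocks;
  }
}
{code}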



> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch
>
>
> When snapshot is enabled, if a file is deleted but is contained by a 
> snapshot, *fsck* will not report blocks for such a file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that the *-includeSnapshots* option causes *fsck* to count blocks for 
> every occurrence of a file on snapshots, which is wrong because these blocks 
> should be counted only once (for instance, if a 100MB file is present on 3 
> snapshots, it would still map to one block only in hdfs). This causes fsck to 
> report many more blocks than actually exist in hdfs and are reported in 
> the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this solution, will propose an initial solution 
> shortly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, 

[jira] [Updated] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-10-20 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-11096:
-
Attachment: HDFS-11096.006.patch

It's possible, but will be tough.

I worked with [~rchiang] to get past the YARN issues I was having. By 
specifying both the hostname (required by shell scripts) and the address (hostname 
+ port) for all of the YARN ports, I was able to get it to work. I feel this 
is possibly an incompatible change in YARN, since in Hadoop 2.x YARN works fine 
with just the hostname specified (as long as everything is going to use the default 
ports), but I'll leave that to [~rchiang]'s judgement if there's a 
good enough reason, and we can put some documentation in place. Specifying the 
ports in a Hadoop 2.x cluster prior to upgrade wouldn't be too bad.
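
For illustration, a minimal sketch of the two-key approach described above, using standard YARN configuration names; the host and port values are examples only:

{code:java}
import org.apache.hadoop.conf.Configuration;

class YarnAddressConfigExample {
  static Configuration example() {
    Configuration conf = new Configuration();
    // The hostname alone is what the shell scripts rely on...
    conf.set("yarn.resourcemanager.hostname", "rm-host.example.com");
    // ...while the full hostname:port address is also set explicitly here.
    conf.set("yarn.resourcemanager.address", "rm-host.example.com:8032");
    return conf;
  }
}
{code}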

I then repeatedly encountered a lot of failures due to timeouts with both 
ZooKeeper and JournalNodes. I increased a couple of timeouts and was able to 
get it working reliably again. Other changes in the revision I'm posting (.006) 
right now:

* Where it applies to both YARN and HDFS, I've stopped using NAMENODES and 
DATANODES in favor of MASTERS and WORKERS
* I fixed the sole shellcheck issue above. It was not raised locally, so my 
version must be out of sync; I can't confirm that I've eliminated the others 
until Yetus runs
* I've added more distcp-over-webhdfs tests: to, from, and on both old and new 
clusters. They're all working perfectly.
 
Currently the only issue I see is that the ResourceManager port 8032 stops 
listening towards the end of the rolling upgrade test. ResourceManager does not 
log any problems, and I don't see any other issues. But after we stop all the 
loops of MapReduce jobs that were running during the rolling upgrade, we can't 
query the job history to confirm they were all successful, because it can't 
connect to :8032 on either node. Other ResourceManager services are still 
listening. This happens even if I comment out the YARN rolling upgrade step.

I may need to get some more help from [~rchiang] debugging that again. I'm also 
going to try running this against branch-3.0 instead of trunk, to eliminate 
some instability I may be seeing.

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Sean Mackrory
>Priority: Blocker
> Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, 
> HDFS-11096.003.patch, HDFS-11096.004.patch, HDFS-11096.005.patch, 
> HDFS-11096.006.patch
>
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12686) Erasure coding system policy state is not correctly saved and loaded during real cluster restart

2017-10-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12686:
---
Priority: Blocker  (was: Critical)

> Erasure coding system policy state is not correctly saved and loaded during 
> real cluster restart
> 
>
> Key: HDFS-12686
> URL: https://issues.apache.org/jira/browse/HDFS-12686
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> Inspired by HDFS-12682, I found that the system erasure coding policy state is 
> not correctly saved and loaded in a real cluster. Though there are unit tests 
> for this and they all pass with MiniCluster, that is because the MiniCluster 
> keeps the same static system erasure coding policy object across the NN 
> restart operation. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML

2017-10-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11467:
---
Priority: Blocker  (was: Major)

> Support ErasureCoding section in OIV XML/ReverseXML
> ---
>
> Key: HDFS-11467
> URL: https://issues.apache.org/jira/browse/HDFS-11467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11467.001.patch, HDFS-11467.002.patch
>
>
> As discussed in HDFS-7859, after ErasureCoding section is added into fsimage, 
> we would like to also support exporting this section into an XML back and 
> forth using the OIV tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests

2017-10-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12497:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Huafeng, committed to trunk and branch-3.0!

> Re-enable TestDFSStripedOutputStreamWithFailure tests
> -
>
> Key: HDFS-12497
> URL: https://issues.apache.org/jira/browse/HDFS-12497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>  Labels: flaky-test, hdfs-ec-3.0-must-do
> Fix For: 3.0.0
>
> Attachments: HDFS-12497.001.patch, HDFS-12497.002.patch, 
> HDFS-12497.003.patch, HDFS-12497.004.patch
>
>
> We disabled this suite of tests in HDFS-12417 since they were very flaky. We 
> should fix these tests and re-enable them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-10-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213176#comment-16213176
 ] 

Andrew Wang commented on HDFS-11096:


Folks, is this going to be committed by the end of the month? Haven't seen an 
update recently.

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Sean Mackrory
>Priority: Blocker
> Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, 
> HDFS-11096.003.patch, HDFS-11096.004.patch, HDFS-11096.005.patch
>
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12249) dfsadmin -metaSave to output maintenance mode blocks

2017-10-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213100#comment-16213100
 ] 

Wei-Chiu Chuang commented on HDFS-12249:


+1

> dfsadmin -metaSave to output maintenance mode blocks
> 
>
> Key: HDFS-12249
> URL: https://issues.apache.org/jira/browse/HDFS-12249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-12249.001.patch
>
>
> Found while reviewing for HDFS-12182.
> {quote}
> After the patch, the output of metaSave is:
> Live Datanodes: 0
> Dead Datanodes: 0
> Metasave: Blocks waiting for reconstruction: 0
> Metasave: Blocks currently missing: 1
> file16387: blk_0_1 MISSING (replicas: l: 0 d: 0 c: 2 e: 0)  
> 1.1.1.1:9866(corrupt) (block deletions maybe out of date) :  
> 2.2.2.2:9866(corrupt) (block deletions maybe out of date) : 
> Mis-replicated blocks that have been postponed:
> Metasave: Blocks being reconstructed: 0
> Metasave: Blocks 0 waiting deletion from 0 datanodes.
> Corrupt Blocks:
> Block=0   Node=1.1.1.1:9866   StorageID=s1StorageState=NORMAL 
> TotalReplicas=2 Reason=GENSTAMP_MISMATCH
> Block=0   Node=2.2.2.2:9866   StorageID=s2StorageState=NORMAL 
> TotalReplicas=2 Reason=GENSTAMP_MISMATCH
> Metasave: Number of datanodes: 0
> {quote}
> {quote}
> Looking at the output
> The output is not user friendly — The meaning of "(replicas: l: 0 d: 0 c: 2 
> e: 0)" is not obvious without looking at the code.
> Also, it should print maintenance mode replicas.
> {quote}
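
For illustration, a rough sketch of a more readable summary line that also includes maintenance-mode replicas; the mapping of l/d/c/e to live/decommissioned/corrupt/excess is an assumption here, not taken from the code:

{code:java}
class ReplicaSummaryExample {
  // Spells out each replica state by name and adds a maintenance count.
  static String format(int live, int decommissioned, int corrupt,
                       int excess, int maintenance) {
    return String.format(
        "replicas: live: %d, decommissioned: %d, corrupt: %d, excess: %d, "
            + "maintenance: %d",
        live, decommissioned, corrupt, excess, maintenance);
  }
}
{code}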



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-10-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213079#comment-16213079
 ] 

Wei-Chiu Chuang commented on HDFS-12518:


+1. LGTM

> Re-encryption should handle task cancellation and progress better
> -
>
> Key: HDFS-12518
> URL: https://issues.apache.org/jira/browse/HDFS-12518
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12518.01.patch, HDFS-12518.02.patch, 
> HDFS-12518.03.patch
>
>
> Re-encryption should handle task cancellation and progress tracking better in 
> general.
> In a recent internal report, a canceled re-encryption could lead to the 
> progress of the zone being 'Processing' forever. Sending a new cancel command 
> would make it complete, but new re-encryptions for the same zone wouldn't 
> work because the canceled future is not removed.
> This jira proposes to fix that, and enhance the current handling so a new 
> command would start from a clean state.
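
For illustration, a rough sketch of the clean-up behavior described above; the zone-to-future map and its names are assumptions, not the actual ReencryptionHandler fields:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;

class ZoneTaskTrackerExample {
  private final Map<Long, Future<?>> tasksByZone = new ConcurrentHashMap<>();

  void submit(long zoneId, Future<?> task) {
    tasksByZone.put(zoneId, task);
  }

  // Cancel and *remove* the future, so a later re-encryption of the same
  // zone starts from a clean state instead of seeing a stale cancelled task.
  void cancel(long zoneId) {
    Future<?> task = tasksByZone.remove(zoneId);
    if (task != null) {
      task.cancel(true);
    }
  }

  boolean isProcessing(long zoneId) {
    Future<?> task = tasksByZone.get(zoneId);
    return task != null && !task.isDone();
  }
}
{code}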



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12518:
-
Attachment: HDFS-12518.03.patch

patch 3 to fix checkstyle

> Re-encryption should handle task cancellation and progress better
> -
>
> Key: HDFS-12518
> URL: https://issues.apache.org/jira/browse/HDFS-12518
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12518.01.patch, HDFS-12518.02.patch, 
> HDFS-12518.03.patch
>
>
> Re-encryption should handle task cancellation and progress tracking better in 
> general.
> In a recent internal report, a canceled re-encryption could lead to the 
> progress of the zone being 'Processing' forever. Sending a new cancel command 
> would make it complete, but new re-encryptions for the same zone wouldn't 
> work because the canceled future is not removed.
> This jira proposes to fix that, and enhance the current handling so a new 
> command would start from a clean state.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213036#comment-16213036
 ] 

Arpit Agarwal commented on HDFS-12683:
--

+1 pending Jenkins, via reviewboard.
https://reviews.apache.org/r/63168/

Jenkins is probably down, I still can't get to builds.apache.org. See if you 
want to run test-patch locally and attach the output. e.g.
bq. dev-support/bin/test-patch --run-tests --test-parallel=true 
--test-threads=4 
https://issues.apache.org/jira/secure/attachment/12893316/HDFS-12683.07.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.
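
For illustration, a minimal sketch of the ordering the description asks for (log first, then tear down); this is not the actual ZKFC code:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FatalShutdownOrderingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(FatalShutdownOrderingExample.class);

  static void fatalError(String msg, Throwable cause) {
    // Log the fatal condition before any teardown, so the cause is not lost
    // if the process exits while connections are being closed.
    LOG.error("Fatal error: " + msg, cause);
    closeConnections();
    System.exit(1);
  }

  private static void closeConnections() {
    // placeholder for ZooKeeper/RPC connection cleanup
  }
}
{code}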



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213036#comment-16213036
 ] 

Arpit Agarwal edited comment on HDFS-12683 at 10/20/17 6:43 PM:


+1 pending Jenkins, via reviewboard.
https://reviews.apache.org/r/63168/

Jenkins is probably down, all HDFS pre-commit builds appear to be failing. See 
if you want to run test-patch locally and attach the output. e.g.
bq. dev-support/bin/test-patch --run-tests --test-parallel=true 
--test-threads=4 
https://issues.apache.org/jira/secure/attachment/12893316/HDFS-12683.07.patch


was (Author: arpitagarwal):
+1 pending Jenkins, via reviewboard.
https://reviews.apache.org/r/63168/

Jenkins is probably down, I still can't get to builds.apache.org. See if you 
want to run test-patch locally and attach the output. e.g.
bq. dev-support/bin/test-patch --run-tests --test-parallel=true 
--test-threads=4 
https://issues.apache.org/jira/secure/attachment/12893316/HDFS-12683.07.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12607) [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED

2017-10-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213001#comment-16213001
 ] 

Íñigo Goiri commented on HDFS-12607:


* In {{ProvidedStorageMap}}, can we use the {{Logger}} format?
* The comment in {{ProvidedStorageMap#setState()}} is not very clear. Something 
along the lines of the JIRA description would be better. Maybe for the whole 
{{setState()}} method
* Capitalize the first word of the comments?
* Can we make {{numFiles > 0}} in the unit test? Should we fail if 
{{numFiles}} is set to 0, since we would not be testing anything otherwise?
* Should we do {{waitActive()}} in the test for every file?

> [READ] Even one dead datanode with PROVIDED storage results in 
> ProvidedStorageInfo being marked as FAILED
> -
>
> Key: HDFS-12607
> URL: https://issues.apache.org/jira/browse/HDFS-12607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12607-HDFS-9806.001.patch, HDFS-12607.repro.patch
>
>
> When a DN configured with PROVIDED storage is marked as dead by the NN, the 
> state of {{providedStorageInfo}} in {{ProvidedStorageMap}} is set to FAILED, 
> and never becomes NORMAL. The state should change to FAILED only if all 
> datanodes with PROVIDED storage are dead, and should be restored back to 
> NORMAL when a Datanode with NORMAL DatanodeStorage reports in.
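
For illustration, a rough sketch of the desired state handling; this is not the actual ProvidedStorageMap code and the names are assumptions:

{code:java}
class ProvidedStorageStateExample {
  enum State { NORMAL, FAILED }

  private int liveProvidedDatanodes;
  private State state = State.FAILED;   // nothing has reported in yet

  // A datanode with healthy PROVIDED storage reports in.
  synchronized void onHealthyReport() {
    liveProvidedDatanodes++;
    state = State.NORMAL;               // restore as soon as one is back
  }

  // A datanode carrying PROVIDED storage is marked dead.
  synchronized void onDatanodeDead() {
    if (liveProvidedDatanodes > 0) {
      liveProvidedDatanodes--;
    }
    if (liveProvidedDatanodes == 0) {
      state = State.FAILED;             // only when *all* of them are dead
    }
  }

  synchronized State getState() {
    return state;
  }
}
{code}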



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Attachment: HDFS-12683.07.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Attachment: HDFS-12683.06.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7878) API - expose an unique file identifier

2017-10-20 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-7878:

Attachment: HDFS-7878.15.patch

> API - expose an unique file identifier
> --
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, 
> HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, 
> HDFS-7878.12.patch, HDFS-7878.13.patch, HDFS-7878.14.patch, 
> HDFS-7878.15.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. for cache key by file, to make sure cache stays correct 
> when file is overwritten.
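
For illustration, a small sketch of the caching use case mentioned above, keyed by a unique file ID rather than by path; the types are stand-ins, not an HDFS API:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class FileIdCacheExample<V> {
  private final Map<Long, V> byFileId = new ConcurrentHashMap<>();

  void put(long fileId, V value) {
    byFileId.put(fileId, value);
  }

  // If a path is overwritten, the new file gets a new ID, so stale entries
  // keyed by the old file's ID are simply never looked up again.
  V get(long fileId) {
    return byFileId.get(fileId);
  }
}
{code}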



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Attachment: HDFS-12683.05.patch

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Status: Patch Available  (was: In Progress)

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12683:
--
Status: In Progress  (was: Patch Available)

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch
>
>
> The ZKFC should log fatal exceptions before closing the connections and 
> terminating server.
> Occasionally we have seen DFSZKFailOver shutdown, but no exception or no 
> error being logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12498:
--
Attachment: HDFS-12498.03.patch

> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, 
> HDFS-12498.03.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in HDFS + Federated cluster, when 
> dfs.shared.edits.dir.<> is provided, instead of 
> dfs.namenode.shared.edits.dir 
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}
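
For illustration, a minimal sketch of a suffix-aware lookup along the lines the report suggests; the exact suffixed key form used by the syncer is an assumption here:

{code:java}
import org.apache.hadoop.conf.Configuration;

class SharedEditsLookupExample {
  static String getSharedEditsDir(Configuration conf, String nameserviceId) {
    // Prefer the nameservice-suffixed key used in federated setups...
    String value = conf.get("dfs.namenode.shared.edits.dir." + nameserviceId);
    if (value == null) {
      // ...and fall back to the plain key for non-federated clusters.
      value = conf.get("dfs.namenode.shared.edits.dir");
    }
    return value;
  }
}
{code}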



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-10-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12498:
--
Status: Patch Available  (was: In Progress)

> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, 
> HDFS-12498.03.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in HDFS + Federated cluster, when 
> dfs.shared.edits.dir.<> is provided, instead of 
> dfs.namenode.shared.edits.dir 
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12620) Backporting HDFS-10467 to branch-2

2017-10-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12620:
---
Description: 
When backporting HDFS-10467, there are a few things that changed:
* {{bin\hdfs}}
* {{ClientProtocol}}
* Java 7 not supporting referencing functions
* {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is 
{{org.mortbay.util.ajax.JSON}}
* {{HashMap#keySet()}} returns a different order in Java 7 and 8 (ported to 
trunk)

  was:
When backporting HDFS-10467, there are a few things that changed:
* {{bin\hdfs}}
* {{ClientProtocol}}
* Java 7 not supporting referencing functions
* {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is 
{{org.mortbay.util.ajax.JSON}}


> Backporting HDFS-10467 to branch-2
> --
>
> Key: HDFS-12620
> URL: https://issues.apache.org/jira/browse/HDFS-12620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-10467-branch-2.001.patch, 
> HDFS-10467-branch-2.002.patch, HDFS-10467-branch-2.003.patch, 
> HDFS-10467-branch-2.004.patch, HDFS-10467-branch-2.patch, 
> HDFS-12620-branch-2.000.patch, HDFS-12620-branch-2.004.patch, 
> HDFS-12620-branch-2.005.patch, HDFS-12620-branch-2.006.patch, 
> HDFS-12620-branch-2.007.patch, HDFS-12620-branch-2.008.patch, 
> HDFS-12620-branch-2.009.patch, HDFS-12620-branch-2.010.patch, 
> HDFS-12620-branch-2.011.patch, HDFS-12620-branch-2.012.patch, 
> HDFS-12620.000.patch
>
>
> When backporting HDFS-10467, there are a few things that changed:
> * {{bin\hdfs}}
> * {{ClientProtocol}}
> * Java 7 not supporting referencing functions
> * {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is 
> {{org.mortbay.util.ajax.JSON}}
> * {{HashMap#keySet()}} returns a different order in Java 7 and 8 (ported to 
> trunk)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12620) Backporting HDFS-10467 to branch-2

2017-10-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12620:
---
Description: 
When backporting HDFS-10467, there are a few things that changed:
* {{bin\hdfs}}
* New methods in {{ClientProtocol}}
* Java 7 not supporting referencing functions
* {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is 
{{org.mortbay.util.ajax.JSON}}
* {{HashMap#keySet()}} returns a different order in Java 7 and 8 (ported to 
trunk)

  was:
When backporting HDFS-10467, there are a few things that changed:
* {{bin\hdfs}}
* {{ClientProtocol}}
* Java 7 not supporting referencing functions
* {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is 
{{org.mortbay.util.ajax.JSON}}
* {{HashMap#keySet()}} returns a different order in Java 7 and 8 (ported to 
trunk)


> Backporting HDFS-10467 to branch-2
> --
>
> Key: HDFS-12620
> URL: https://issues.apache.org/jira/browse/HDFS-12620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-10467-branch-2.001.patch, 
> HDFS-10467-branch-2.002.patch, HDFS-10467-branch-2.003.patch, 
> HDFS-10467-branch-2.004.patch, HDFS-10467-branch-2.patch, 
> HDFS-12620-branch-2.000.patch, HDFS-12620-branch-2.004.patch, 
> HDFS-12620-branch-2.005.patch, HDFS-12620-branch-2.006.patch, 
> HDFS-12620-branch-2.007.patch, HDFS-12620-branch-2.008.patch, 
> HDFS-12620-branch-2.009.patch, HDFS-12620-branch-2.010.patch, 
> HDFS-12620-branch-2.011.patch, HDFS-12620-branch-2.012.patch, 
> HDFS-12620.000.patch
>
>
> When backporting HDFS-10467, there are a few things that changed:
> * {{bin\hdfs}}
> * New methods in {{ClientProtocol}}
> * Java 7 not supporting referencing functions
> * {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is 
> {{org.mortbay.util.ajax.JSON}}
> * {{HashMap#keySet()}} returns a different order in Java 7 and 8 (ported to 
> trunk)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212858#comment-16212858
 ] 

Hadoop QA commented on HDFS-12618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 91 unchanged - 4 fixed = 92 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893245/HDFS-12618.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fb6aa782d100 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f4cdf1 |
| Default Java | 1.8.0_131 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21763/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21763/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21763/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCom

[jira] [Resolved] (HDFS-12688) HDFS File Not Removed Despite Successful "Moved to .Trash" Message

2017-10-20 Thread Shriya Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shriya Gupta resolved HDFS-12688.
-
Resolution: Not A Bug

Another job was recreating the file that the user suspected wasn't being 
deleted.

> HDFS File Not Removed Despite Successful "Moved to .Trash" Message
> --
>
> Key: HDFS-12688
> URL: https://issues.apache.org/jira/browse/HDFS-12688
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Shriya Gupta
>Priority: Critical
>
> Wrote a simple script to delete and create a file and ran it multiple times. 
> However, some executions of the script randomly threw a FileAlreadyExists 
> error while the others succeeded despite a successful hdfs dfs -rm command. The 
> script is below; I have reproduced this in two different environments -- 
> hdfs dfs -ls  /user/shriya/shell_test/
> echo "starting hdfs remove **" 
> hdfs dfs -rm -r -f /user/shriya/shell_test/wordcountOutput
>  echo "hdfs compeleted!"
> hdfs dfs -ls  /user/shriya/shell_test/
> echo "starting mapReduce***"
> mapred job -libjars 
> /data/home/shriya/shell_test/hadoop-mapreduce-client-jobclient-2.7.1.jar 
> -submit /data/home/shriya/shell_test/wordcountJob.xml
> The message confirming successful move -- 
> 17/10/19 14:49:12 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nameservice1/user/shriya/shell_test/wordcountOutput' to trash at: 
> hdfs://nameservice1/user/shriya/.Trash/Current/user/shriya/shell_test/wordcountOutput1508438952728
> The contents of subsequent -ls after -rm also showed that the file still 
> existed.
> The error I got when my MapReduce job tried to create the file -- 
> 17/10/19 14:50:00 WARN security.UserGroupInformation: 
> PriviledgedActionException as: (auth:KERBEROS) 
> cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
> hdfs://nameservice1/user/shriya/shell_test/wordcountOutput already exists
> Exception in thread "main" 
> org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
> hdfs://nameservice1/user/shriya/shell_test/wordcountOutput already exists
> at 
> org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:272)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:143)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
> at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12688) HDFS File Not Removed Despite Successful "Moved to .Trash" Message

2017-10-20 Thread Shriya Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212839#comment-16212839
 ] 

Shriya Gupta commented on HDFS-12688:
-

You are right! There was another job that was writing to the same file in there 
and thus recreating it. This explains it, thank you!

> HDFS File Not Removed Despite Successful "Moved to .Trash" Message
> --
>
> Key: HDFS-12688
> URL: https://issues.apache.org/jira/browse/HDFS-12688
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Shriya Gupta
>Priority: Critical
>
> Wrote a simple script to delete and create a file and ran it multiple times. 
> However, some executions of the script randomly threw a FileAlreadyExists 
> error while the others succeeded despite a successful hdfs dfs -rm command. The 
> script is below; I have reproduced this in two different environments -- 
> hdfs dfs -ls  /user/shriya/shell_test/
> echo "starting hdfs remove **" 
> hdfs dfs -rm -r -f /user/shriya/shell_test/wordcountOutput
>  echo "hdfs compeleted!"
> hdfs dfs -ls  /user/shriya/shell_test/
> echo "starting mapReduce***"
> mapred job -libjars 
> /data/home/shriya/shell_test/hadoop-mapreduce-client-jobclient-2.7.1.jar 
> -submit /data/home/shriya/shell_test/wordcountJob.xml
> The message confirming successful move -- 
> 17/10/19 14:49:12 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nameservice1/user/shriya/shell_test/wordcountOutput' to trash at: 
> hdfs://nameservice1/user/shriya/.Trash/Current/user/shriya/shell_test/wordcountOutput1508438952728
> The contents of subsequent -ls after -rm also showed that the file still 
> existed.
> The error I got when my MapReduce job tried to create the file -- 
> 17/10/19 14:50:00 WARN security.UserGroupInformation: 
> PriviledgedActionException as: (auth:KERBEROS) 
> cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
> hdfs://nameservice1/user/shriya/shell_test/wordcountOutput already exists
> Exception in thread "main" 
> org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
> hdfs://nameservice1/user/shriya/shell_test/wordcountOutput already exists
> at 
> org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:272)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:143)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
> at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12692) Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non existing key

2017-10-20 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212818#comment-16212818
 ] 

Xiaoyu Yao commented on HDFS-12692:
---

Thanks [~elek] for fixing this. Patch looks good to me, +1 pending Jenkins.

> Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non 
> existing key
> 
>
> Key: HDFS-12692
> URL: https://issues.apache.org/jira/browse/HDFS-12692
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12692-HDFS-7240.001.patch
>
>
> The behaviour of MetadataStore.getRangeKVs was changed by HDFS-12572. An 
> empty list will be returned instead of an IOException in the case of a 
> non-existing key. 
> But in a few places the javadoc has not been updated.
> This patch fixes the javadoc according to the improved implementation and adds 
> an additional test to prove the defined behaviour. (It also fixes a small typo in 
> the javadoc of the unit test.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-10-20 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12618:

Attachment: (was: HDFS-121618.001.patch)

> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch
>
>
> When snapshot is enabled, if a file is deleted but is contained by a 
> snapshot, *fsck* will not report blocks for such a file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that the *-includeSnapshots* option causes *fsck* to count blocks for 
> every occurrence of a file on snapshots, which is wrong because these blocks 
> should be counted only once (for instance, if a 100MB file is present on 3 
> snapshots, it would still map to one block only in hdfs). This causes fsck to 
> report many more blocks than actually exist in hdfs and are reported in 
> the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this solution, will propose an initial solution 
> shortly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12621) Inconsistency/confusion around ViewFileSystem.getDelagation

2017-10-20 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212782#comment-16212782
 ] 

Erik Krogen commented on HDFS-12621:


Hm... There is some reference in the design document for HDFS-10467, but not a 
lot of detail. As-is it actually will not work since it relies on an external 
state store. This is what I had in mind for ViewFS, loosely based off of their 
ideas:
* Upon a call to {{ViewFileSystem#getDelegationToken()}}, make a call to each 
underlying Namenode. This is essentially the same as the behavior of  
{{addDelegationToken()}}.
* Since {{getDelegationToken()}} can only store one delegation token, we need a 
way to stuff all of the other tokens inside of it. Create a new 
{{ViewFSTokenIdentifier extends TokenIdentifier}} which will store all of them.
* When a {{ViewFileSystem}} method is called under a {{UserGroupInformation}} 
which is authenticated via a {{ViewFSTokenIdentifier}}, extract the underlying 
Token and send it along to the corresponding NN.
* To serialize {{ViewFSTokenIdentifier}} for later use by other clients, we 
leverage the fact that all of the underlying Tokens are also serializable 
({{Writable}}). The sequence of bytes that {{ViewFSTokenIdentifier}} serializes 
to can simply be a concatenation of the serialized bytes for each underlying 
Token, along with some information about which Token goes with which mount 
point and/or NN URI. This should enable, after deserialization, any other 
client to perform the same steps, assuming that it is using the same ViewFS 
mount table.

It would require a fair amount of effort / code change, but all isolated to 
within {{ViewFileSystem}} code.
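
To make the concatenated-serialization idea above concrete, here is a rough, hypothetical sketch; the class name, field layout and wire format below are invented for illustration and are not taken from any posted patch:

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class ViewFSTokenIdentifierSketch extends TokenIdentifier {
  public static final Text KIND = new Text("VIEWFS_DELEGATION_TOKEN");

  // mount point (or NN URI) -> delegation token issued by that underlying NN
  private final Map<String, Token<? extends TokenIdentifier>> tokens =
      new LinkedHashMap<>();

  public void addToken(String mountPoint, Token<? extends TokenIdentifier> token) {
    tokens.put(mountPoint, token);
  }

  @Override
  public void write(DataOutput out) throws IOException {
    // Concatenate the underlying tokens, each prefixed by its mount point.
    out.writeInt(tokens.size());
    for (Map.Entry<String, Token<? extends TokenIdentifier>> e : tokens.entrySet()) {
      new Text(e.getKey()).write(out);
      e.getValue().write(out);      // Token is itself Writable
    }
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    tokens.clear();
    int count = in.readInt();
    for (int i = 0; i < count; i++) {
      Text mountPoint = new Text();
      mountPoint.readFields(in);
      Token<TokenIdentifier> token = new Token<>();
      token.readFields(in);
      tokens.put(mountPoint.toString(), token);
    }
  }

  @Override
  public Text getKind() {
    return KIND;
  }

  @Override
  public UserGroupInformation getUser() {
    // A real implementation would derive the user from the underlying tokens.
    return null;
  }
}
{code}

Any other client that deserializes this identifier and shares the same mount table could then pull out the per-NN token for whichever mount point a call resolves to, which is the property the last two bullets rely on.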

> Inconsistency/confusion around ViewFileSystem.getDelagation 
> 
>
> Key: HDFS-12621
> URL: https://issues.apache.org/jira/browse/HDFS-12621
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> *Symptom*: 
> When a user invokes ViewFileSystem.getDelegationToken(String renewer), she 
> gets a "null". However, for any other file system, it returns a valid 
> delegation token. For a normal user, it is very confusing and it takes 
> substantial time to debug/find out an alternative.
> *Root Cause:*
>  ViewFileSystem inherits the basic implementation from 
> FileSystem.getDelegationToken() that returns "_null_". The comments in the 
> source code indicate not to use it and instead to use addDelegationTokens(). 
> However, it works fine for DistributedFileSystem. 
> In short, the same client call is working for hdfs:// but not for viewfs://. 
> And there is no way for the end-user to identify the root cause. This also creates 
> a lot of confusion for any service that is supposed to work for both viewfs 
> and hdfs.
> *Possible Solution*:
> _Option 1:_ Add a LOG.warn() with reasons/alternatives before returning 
> "null" in the base class.
> _Option 2:_ As done for other FSs, ViewFileSystem can override the method with 
> an implementation that returns the token related to fs.defaultFS. In this 
> case, the defaultFS is something like "viewfs://..". We need to find out the 
> actual namenode and use that to retrieve the delegation token.
> _Option 3:_ Open for suggestion ?
> *Last note:* My hunch is: there are very few users who may be using 
> viewfs:// with Kerberos. Therefore, it was not being exposed earlier.
> I'm working on a good solution. Please add your suggestion.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212735#comment-16212735
 ] 

Hadoop QA commented on HDFS-12591:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
55s{color} | {color:red} Docker failed to build yetus/hadoop:71bbb86. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893269/HDFS-12591-HDFS-9806.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21761/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212732#comment-16212732
 ] 

Hadoop QA commented on HDFS-12588:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  6m  
1s{color} | {color:red} Docker failed to build yetus/hadoop:71bbb86. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12588 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893268/HDFS-12588-HDFS-7240.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21759/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use GenericOptionsParser for scm and ksm daemon
> ---
>
> Key: HDFS-12588
> URL: https://issues.apache.org/jira/browse/HDFS-12588
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12588-HDFS-7240.001.patch, 
> HDFS-12588-HDFS-7240.002.patch, HDFS-12588-HDFS-7240.003.patch, 
> HDFS-12588-HDFS-7240.004.patch
>
>
> Most of the hadoop commands use the GenericOptionsParser to accept some common 
> CLI arguments (such as -conf, -D, or -libjars to define or modify the 
> configuration and modify the classpath).
> I suggest using the same common options for the scm and ksm daemons as well, as:
> 1. It allows using the existing cluster management tools/scripts, as the 
> daemons can be configured in the same way as the namenode and datanode.
> 2. It follows the convention from hadoop common.
> 3. It's easier to develop from the IDE (I start the ksm/scm/datanode/namenode 
> from intellij but need to add the configuration to the classpath. With 
> -conf I would be able to use an external configuration.)
> I found one problem during the implementation. Until now we used the `hdfs scm` 
> command both for the daemon and the scm command line client. If there were no 
> parameters the daemon was started; with parameters the cli was started. The 
> help listed only the daemon.
> The -conf option (GenericOptionsParser) could be used only if we separate the scm and 
> scmcli commands. But anyway, it's cleaner and more visible if we have 
> separate `hdfs scm` and `hdfs scmcli` commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212728#comment-16212728
 ] 

Hadoop QA commented on HDFS-12690:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
12s{color} | {color:red} Docker failed to build yetus/hadoop:71bbb86. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12690 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893224/HDFS-12690-HDFS-7240.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21764/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: generate swagger descriptor for the Ozone REST Api
> -
>
> Key: HDFS-12690
> URL: https://issues.apache.org/jira/browse/HDFS-12690
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12690-HDFS-7240.001.patch
>
>
> This patch generates the ozone.swagger.json descriptor at compile time 
> and adds it to the static/ web folder (so it is available from all the web UIs).
> Note: I tested multiple methods to generate the swagger file: at runtime and at 
> compile time. Runtime generation needs additional dependencies and multiple 
> workarounds, as we have no ServletContext and ServletConfig in the netty based 
> adapter (but it's possible to add a stub one). I prefer compile-time 
> generation because it is simpler and could also be used to generate 
> additional documentation.
> This patch contains only the basic @Api/@ApiMethod annotations; the parameters 
> are not yet annotated. Follow-up tasks:
>  *  We can check how we can generate parameter-level descriptions from the 
> javadoc. It is possible with a custom docket + custom swagger reader, but maybe 
> it isn't worth it.
>  * We can add a swagger ui (there are many implementations). It's a licence 
> nightmare, as most of the UIs contain an unbounded number of npm dependencies with 
> differently licensed (but mostly Apache + MIT) artifacts. I would suggest adding a 
> swagger ui without bundling the javascript and loading it from a cdn. It will work 
> only with an active internet connection, but without the licencing issue.
>  * Long term, with this plugin we can also generate the content of 
> OzoneRest.md (after fine-tuning the swagger annotations).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212727#comment-16212727
 ] 

Hadoop QA commented on HDFS-12665:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12665 does not apply to HDFS-9806. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12665 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893220/HDFS-12665-HDFS-9806.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21762/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212722#comment-16212722
 ] 

Hadoop QA commented on HDFS-7060:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-7060 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-7060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12708051/HDFS-7060-002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21760/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
> HDFS-7060.001.patch
>
>
> We're seeing the heartbeat is blocked by the monitor of {{FsDatasetImpl}} 
> when the DN is under heavy load of writes:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---

[jira] [Commented] (HDFS-12692) Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non existing key

2017-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212724#comment-16212724
 ] 

Hadoop QA commented on HDFS-12692:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
34s{color} | {color:red} Docker failed to build yetus/hadoop:71bbb86. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12692 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893250/HDFS-12692-HDFS-7240.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21758/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non 
> existing key
> 
>
> Key: HDFS-12692
> URL: https://issues.apache.org/jira/browse/HDFS-12692
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12692-HDFS-7240.001.patch
>
>
> The behaviour of MetadataStore.getRangeKVs was changed by HDFS-12572. An 
> empty list will be returned instead of an IOException in the case of a 
> non-existing key. 
> But in a few places the javadoc has not been updated.
> This patch fixes the javadoc according to the improved implementation and adds 
> an additional test to prove the defined behaviour. (It also fixes a small typo in 
> the javadoc of the unit test.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-20 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212698#comment-16212698
 ] 

Ewan Higgs commented on HDFS-12591:
---

This now depends on HDFS-12665 since it uses the same code for reading/writing 
to leveldb as that ticket.

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-20 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12591:
--
Attachment: HDFS-12591-HDFS-9806.003.patch

Attaching a patch rebased on HDFS-12665 so the two LevelDB alias maps (file 
based and NN in-memory) use the same code for reading and writing data.
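
For readers following along, here is a minimal sketch of the kind of shared LevelDB read/write path the two alias maps could have in common. The key/value layout below is invented for illustration (HDFS-12665 describes protobuf-encoded keys and values); only the leveldbjni open/put/get plumbing is the point:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

public class LevelDbAliasMapSketch {
  public static void main(String[] args) throws IOException {
    Options options = new Options();
    options.createIfMissing(true);
    // Hypothetical on-disk location for the alias map.
    try (DB db = JniDBFactory.factory.open(new File("/tmp/alias-map-sketch"), options)) {
      byte[] key = "blockpool-1/1073741825".getBytes(StandardCharsets.UTF_8);
      byte[] value = "remote-store://bucket/part-0000,0,134217728"
          .getBytes(StandardCharsets.UTF_8);
      db.put(key, value);            // write a FileRegion-like entry
      byte[] stored = db.get(key);   // read it back
      System.out.println(new String(stored, StandardCharsets.UTF_8));
    }
  }
}
{code}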

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12588:

Attachment: HDFS-12588-HDFS-7240.004.patch

Thx [~cheersyang] for the feedback.

1. You are right, I moved the startupShutdownMessages after the arguments.

2. It's more tricky. There are 3 mapreduce-related arguments (files, archives, 
tokensFile) and one yarn-related one (Resourcemanager) which don't matter for the 
hdfs/ozone components. But that was the current practice in the Hadoop source 
code; they were just ignored. I agree with you and would be happy to fix 
it by refactoring the GenericOptionsParser and creating some builder pattern, 
but I suggest doing that in a different jira, as it requires a wider 
(non-ozone related) patchset (Datanode and Namenode should also be fixed, and maybe 
other hdfs components).
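
For reference, the daemon-side pattern being proposed looks roughly like this; the daemon class below is made up purely to illustrate the option handling and is not the actual patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

// Hypothetical daemon entry point, for illustration only.
public class KsmDaemonSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // GenericOptionsParser consumes the common flags (-conf, -D, -libjars, ...)
    // and applies them to the Configuration before the daemon starts.
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    conf = parser.getConfiguration();
    String[] remainingArgs = parser.getRemainingArgs();
    // ...print the startup/shutdown message, then start the daemon with `conf`...
    System.out.println("Starting daemon with " + remainingArgs.length
        + " remaining argument(s)");
  }
}
{code}

The -conf flag is what makes the "external configuration from the IDE" use case in the description work.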

> Use GenericOptionsParser for scm and ksm daemon
> ---
>
> Key: HDFS-12588
> URL: https://issues.apache.org/jira/browse/HDFS-12588
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12588-HDFS-7240.001.patch, 
> HDFS-12588-HDFS-7240.002.patch, HDFS-12588-HDFS-7240.003.patch, 
> HDFS-12588-HDFS-7240.004.patch
>
>
> Most of the hadoop commands use the GenericOptionsParser to accept some common 
> CLI arguments (such as -conf, -D, or -libjars to define or modify the 
> configuration and modify the classpath).
> I suggest using the same common options for the scm and ksm daemons as well, as:
> 1. It allows using the existing cluster management tools/scripts, as the 
> daemons can be configured in the same way as the namenode and datanode.
> 2. It follows the convention from hadoop common.
> 3. It's easier to develop from the IDE (I start the ksm/scm/datanode/namenode 
> from intellij but need to add the configuration to the classpath. With 
> -conf I would be able to use an external configuration.)
> I found one problem during the implementation. Until now we used the `hdfs scm` 
> command both for the daemon and the scm command line client. If there were no 
> parameters the daemon was started; with parameters the cli was started. The 
> help listed only the daemon.
> The -conf option (GenericOptionsParser) could be used only if we separate the scm and 
> scmcli commands. But anyway, it's cleaner and more visible if we have 
> separate `hdfs scm` and `hdfs scmcli` commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12688) HDFS File Not Removed Despite Successful "Moved to .Trash" Message

2017-10-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212654#comment-16212654
 ] 

Jason Lowe commented on HDFS-12688:
---

Then there is very likely some other job or async behavior that is re-creating 
the directory.  Please examine the HDFS audit logs.  There you should see why the 
directory is getting re-created after the delete and which node is doing it.  
That will likely pinpoint exactly how this is occurring.



> HDFS File Not Removed Despite Successful "Moved to .Trash" Message
> --
>
> Key: HDFS-12688
> URL: https://issues.apache.org/jira/browse/HDFS-12688
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Shriya Gupta
>Priority: Critical
>
> Wrote a simple script to delete and create a file and ran it multiple times. 
> However, some executions of the script randomly threw a FileAlreadyExists 
> error while the others succeeded despite a successful hdfs dfs -rm command. The 
> script is below; I have reproduced this in two different environments -- 
> hdfs dfs -ls  /user/shriya/shell_test/
> echo "starting hdfs remove **" 
> hdfs dfs -rm -r -f /user/shriya/shell_test/wordcountOutput
>  echo "hdfs compeleted!"
> hdfs dfs -ls  /user/shriya/shell_test/
> echo "starting mapReduce***"
> mapred job -libjars 
> /data/home/shriya/shell_test/hadoop-mapreduce-client-jobclient-2.7.1.jar 
> -submit /data/home/shriya/shell_test/wordcountJob.xml
> The message confirming successful move -- 
> 17/10/19 14:49:12 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://nameservice1/user/shriya/shell_test/wordcountOutput' to trash at: 
> hdfs://nameservice1/user/shriya/.Trash/Current/user/shriya/shell_test/wordcountOutput1508438952728
> The contents of subsequent -ls after -rm also showed that the file still 
> existed.
> The error I got when my MapReduce job tried to create the file -- 
> 17/10/19 14:50:00 WARN security.UserGroupInformation: 
> PriviledgedActionException as: (auth:KERBEROS) 
> cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
> hdfs://nameservice1/user/shriya/shell_test/wordcountOutput already exists
> Exception in thread "main" 
> org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
> hdfs://nameservice1/user/shriya/shell_test/wordcountOutput already exists
> at 
> org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:272)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:143)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
> at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12691) Ozone: Decrease interval time of SCMBlockDeletingService for improving the efficiency

2017-10-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212652#comment-16212652
 ] 

Weiwei Yang commented on HDFS-12691:


Hi [~linyiqun]

Thanks for filing this. Well, I agree this is definitely a place to improve, but 
the thing I am not sure about is what a better value for the interval would be. I 
think we need some scale testing to understand where the bottleneck is. I generally 
agree that we can let {{KeyDeletingService}} and {{SCMBlockDeletingService}} 
run on their own intervals (faster than 1 minute), but if we make them 
configurable it will be too tricky for users to set a proper value. I tried to 
test this part before, but with a single client I could not find the 
bottleneck. The test we need is lots of nodes, and lots of concurrent deletes 
on those nodes. Then we'll see how this performs. But at present, I don't have 
a solid idea how to improve this. Any good ideas to share? Thanks
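
If the intervals do become configurable, the per-service wiring is small; a minimal sketch, with a made-up property name and default (real keys would live in the Ozone configuration classes):

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class DeletingServiceIntervalSketch {
  // Hypothetical property name and default, for illustration only.
  static final String INTERVAL_KEY = "ozone.scm.block.deleting.service.interval";
  static final long DEFAULT_INTERVAL_SECONDS = 60;

  /** Resolve the polling interval for a background deleting service, in ms. */
  public static long intervalMillis(Configuration conf) {
    long seconds = conf.getTimeDuration(
        INTERVAL_KEY, DEFAULT_INTERVAL_SECONDS, TimeUnit.SECONDS);
    return TimeUnit.SECONDS.toMillis(seconds);
  }
}
{code}

That would let {{KeyDeletingService}} and {{SCMBlockDeletingService}} run on their own intervals while keeping a safe default; whether users should ever tune it is the open question above.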

> Ozone: Decrease interval time of SCMBlockDeletingService for improving the 
> efficiency
> -
>
> Key: HDFS-12691
> URL: https://issues.apache.org/jira/browse/HDFS-12691
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, performance
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> After logging the elapsed time of each block deletion task, we found some places 
> where we can make improvements. The logging during testing:
> {noformat}
> 2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:02:56,169 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:02:56,451 [SCMBlockDeletingService#0] INFO  
> utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
> service : SCMBlockDeletingService
> 2017-10-20 17:02:56,755 [KeyDeletingService#0] INFO  utils.BackgroundService 
> (BackgroundService.java:run(99)) - Running background service : 
> KeyDeletingService
> 2017-10-20 17:02:56,758 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
> (KeyDeletingService.java:call(99))  - Found 11 to-delete keys in KSM
> 2017-10-20 17:02:56,775 [IPC Server handler 19 on 52342] INFO  
> scm.StorageContainerManager 
> (StorageContainerManager.java:deleteKeyBlocks(870))  - SCM is informed by 
> KSM to delete 11 blocks
> 2017-10-20 17:02:57,182 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:02:57,640 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
> (KeyDeletingService.java:call(125))  - Number of key deleted from KSM DB: 
> 11, task elapsed time: 885ms
> 2017-10-20 17:02:58,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:03,178 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> ...
> 2017-10-20 17:03:04,167 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:05,173 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  
> utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
> service : BlockDeletingService
> 2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  
> background.BlockDeletingService (BlockDeletingService.java:getTasks(109)) 
>  - Plan to choose 10 containers for block deletion, actually returns 0 valid 
> containers.
> 2017-10-20 17:03:06,171 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> ...
> 2017-10-20 17:03:54,279 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:55,267 [Datanode State Machine Thread - 0] INFO  
> Config

[jira] [Updated] (HDFS-12692) Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non existing key

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12692:

Status: Patch Available  (was: Open)

> Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non 
> existing key
> 
>
> Key: HDFS-12692
> URL: https://issues.apache.org/jira/browse/HDFS-12692
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12692-HDFS-7240.001.patch
>
>
> The behaviour of MetadataStore.getRangeKVs was changed by HDFS-12572. An 
> empty list will be returned instead of an IOException in the case of a 
> non-existing key. 
> But in a few places the javadoc has not been updated.
> This patch fixes the javadoc according to the improved implementation and adds 
> an additional test to prove the defined behaviour. (It also fixes a small typo in 
> the javadoc of the unit test.)






[jira] [Updated] (HDFS-12692) Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non existing key

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12692:

Attachment: HDFS-12692-HDFS-7240.001.patch

> Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non 
> existing key
> 
>
> Key: HDFS-12692
> URL: https://issues.apache.org/jira/browse/HDFS-12692
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12692-HDFS-7240.001.patch
>
>
> The behaviour of MetadataStore.getRangeKVs was changed by HDFS-12572: an 
> empty list is now returned instead of an IOException in case of a 
> non-existing key.
> However, in a few places the javadoc has not been updated.
> This patch fixes the javadoc to match the improved implementation and adds 
> an additional test to prove the defined behaviour. (It also fixes a small 
> typo in the javadoc of the unit test.)






[jira] [Assigned] (HDFS-12692) Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non existing key

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-12692:
---

Assignee: Elek, Marton

> Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non 
> existing key
> 
>
> Key: HDFS-12692
> URL: https://issues.apache.org/jira/browse/HDFS-12692
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> The behaviour of MetadataStore.getRangeKVs was changed by HDFS-12572: an 
> empty list is now returned instead of an IOException in case of a 
> non-existing key.
> However, in a few places the javadoc has not been updated.
> This patch fixes the javadoc to match the improved implementation and adds 
> an additional test to prove the defined behaviour. (It also fixes a small 
> typo in the javadoc of the unit test.)






[jira] [Created] (HDFS-12692) Ozone: fix javadoc/unit test for calling MetadataStore.getRangeKVs with non existing key

2017-10-20 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12692:
---

 Summary: Ozone: fix javadoc/unit test for calling 
MetadataStore.getRangeKVs with non existing key
 Key: HDFS-12692
 URL: https://issues.apache.org/jira/browse/HDFS-12692
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton


The behaviour of MetadataStore.getRangeKVs was changed by HDFS-12572: an empty 
list is now returned instead of an IOException in case of a non-existing key.

However, in a few places the javadoc has not been updated.

This patch fixes the javadoc to match the improved implementation and adds an 
additional test to prove the defined behaviour. (It also fixes a small typo in 
the javadoc of the unit test.)
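
To make the intended contract concrete, a minimal test sketch follows. It is 
illustrative only: the MetadataStore setup is omitted, and the exact 
getRangeKVs signature and key encoding are assumptions based on this 
description, not the attached patch.

{code:java}
// Hypothetical sketch, not the attached patch. The store field is assumed to
// be a MetadataStore initialised elsewhere with a handful of entries.
@Test
public void testGetRangeKVsOnNonExistingKey() throws Exception {
  byte[] missingKey = "no-such-key".getBytes(StandardCharsets.UTF_8);
  // Behaviour after HDFS-12572: an empty list is returned, no IOException.
  List<Map.Entry<byte[], byte[]>> result = store.getRangeKVs(missingKey, 10);
  assertNotNull(result);
  assertTrue(result.isEmpty());
}
{code}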






[jira] [Updated] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-10-20 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12618:

Attachment: HDFS-12618.001.patch

Attaching a new patch with the checkstyle issues fixed. I also fixed the patch 
name; the previously submitted patch had a typo in the JIRA id.

The last test failure seems unrelated, as the same test passes locally.

> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.001.patch, HDFS-121618.initial, 
> HDFS-12618.001.patch
>
>
> When snapshots are enabled, if a file is deleted but is still contained in a 
> snapshot, *fsck* will not report blocks for that file, showing a different 
> number of *total blocks* than what is exposed in the Web UI.
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that the *-includeSnapshots* option causes *fsck* to count blocks 
> for every occurrence of a file in snapshots, which is wrong because these 
> blocks should be counted only once (for instance, if a 100MB file is present 
> in 3 snapshots, it still maps to only one block in HDFS). This causes fsck to 
> report many more blocks than actually exist in HDFS and than are reported in 
> the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this and will propose an initial solution shortly.
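
One way to address this, sketched below, is to remember which block ids have 
already been counted while fsck walks the live and .snapshot paths, so a block 
referenced by the live file and by several snapshots contributes to the total 
only once. The class and method names are hypothetical; this is not the 
attached patch.

{code:java}
// Hypothetical sketch: count each underlying block once even if the file
// appears in several snapshots. Names are illustrative, not from the patch.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SnapshotAwareBlockCounter {
  private final Set<Long> countedBlockIds = new HashSet<>();
  private long totalBlocks = 0;

  /** Called for the blocks of every file reached via a live or snapshot path. */
  public void addFileBlocks(List<Long> blockIds) {
    for (long id : blockIds) {
      // Count a block only the first time it is seen; a block referenced by
      // the live file and by N snapshots still contributes 1 to the total.
      if (countedBlockIds.add(id)) {
        totalBlocks++;
      }
    }
  }

  public long getTotalBlocks() {
    return totalBlocks;
  }
}
{code}

With this kind of de-duplication, the two 200MB files above (2 blocks each) 
would contribute 4 blocks to the total regardless of how many snapshots 
reference them, matching the Web UI.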






[jira] [Commented] (HDFS-12691) Ozone: Decrease interval time of SCMBlockDeletingService for improving the efficiency

2017-10-20 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212500#comment-16212500
 ] 

Yiqun Lin commented on HDFS-12691:
--

Softly pinging [~cheersyang], you may be interested in this, :).

> Ozone: Decrease interval time of SCMBlockDeletingService for improving the 
> efficiency
> -
>
> Key: HDFS-12691
> URL: https://issues.apache.org/jira/browse/HDFS-12691
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, performance
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> After logging the elapsed time of each block deletion task, I found some 
> places where we can make improvements. The logging during testing:
> {noformat}
> 2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:02:56,169 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:02:56,451 [SCMBlockDeletingService#0] INFO  
> utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
> service : SCMBlockDeletingService
> 2017-10-20 17:02:56,755 [KeyDeletingService#0] INFO  utils.BackgroundService 
> (BackgroundService.java:run(99)) - Running background service : 
> KeyDeletingService
> 2017-10-20 17:02:56,758 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
> (KeyDeletingService.java:call(99))  - Found 11 to-delete keys in KSM
> 2017-10-20 17:02:56,775 [IPC Server handler 19 on 52342] INFO  
> scm.StorageContainerManager 
> (StorageContainerManager.java:deleteKeyBlocks(870))  - SCM is informed by 
> KSM to delete 11 blocks
> 2017-10-20 17:02:57,182 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:02:57,640 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
> (KeyDeletingService.java:call(125))  - Number of key deleted from KSM DB: 
> 11, task elapsed time: 885ms
> 2017-10-20 17:02:58,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:03,178 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> ...
> 2017-10-20 17:03:04,167 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:05,173 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  
> utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
> service : BlockDeletingService
> 2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  
> background.BlockDeletingService (BlockDeletingService.java:getTasks(109)) 
>  - Plan to choose 10 containers for block deletion, actually returns 0 valid 
> containers.
> 2017-10-20 17:03:06,171 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> ...
> 2017-10-20 17:03:54,279 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:55,267 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:56,282 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> 2017-10-20 17:03:56,461 [SCMBlockDeletingService#0] INFO  
> utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
> service : SCMBlockDeletingService
> 2017-10-20 17:03:56,467 [SCMBlockDeletingService#1] INFO  
> block.SCMBlockDeletingService (SCMBlockDeletingService.java:call(129))  - 
> Totally added 11 delete blocks command for 1 dat
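
For context on the tuning being proposed: the deletion work in the log above is 
driven by fixed-interval background services, so shortening an interval 
shortens the end-to-end delay between KSM marking keys for deletion and 
datanodes receiving delete commands, at the cost of more frequent (possibly 
empty) polling rounds. Below is a generic, self-contained sketch of such a 
fixed-interval service; it is not the actual Ozone BackgroundService code, and 
the interval value is only an example.

{code:java}
// Generic illustration of a fixed-interval background service; not the Ozone
// BackgroundService implementation. A smaller interval reduces the latency of
// each deletion round but wakes the service up more often.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IntervalServiceSketch {
  public static void main(String[] args) {
    long intervalMs = 10_000;  // example value only; in Ozone this is configurable
    ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleWithFixedDelay(
        () -> System.out.println("running block deletion round"),
        intervalMs, intervalMs, TimeUnit.MILLISECONDS);
  }
}
{code}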

[jira] [Updated] (HDFS-12691) Ozone: Decrease interval time of SCMBlockDeletingService for improving the efficiency

2017-10-20 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12691:
-
Description: 
After logging the elapsed time of each block deletion task, I found some places 
where we can make improvements. The logging during testing:
{noformat}
2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:02:56,169 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:02:56,451 [SCMBlockDeletingService#0] INFO  
utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
service : SCMBlockDeletingService
2017-10-20 17:02:56,755 [KeyDeletingService#0] INFO  utils.BackgroundService 
(BackgroundService.java:run(99)) - Running background service : 
KeyDeletingService
2017-10-20 17:02:56,758 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
(KeyDeletingService.java:call(99))  - Found 11 to-delete keys in KSM
2017-10-20 17:02:56,775 [IPC Server handler 19 on 52342] INFO  
scm.StorageContainerManager (StorageContainerManager.java:deleteKeyBlocks(870)) 
 - SCM is informed by KSM to delete 11 blocks
2017-10-20 17:02:57,182 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:02:57,640 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
(KeyDeletingService.java:call(125))  - Number of key deleted from KSM DB: 
11, task elapsed time: 885ms
2017-10-20 17:02:58,168 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:03,178 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
...
2017-10-20 17:03:04,167 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:05,173 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  utils.BackgroundService 
(BackgroundService.java:run(99)) - Running background service : 
BlockDeletingService
2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  
background.BlockDeletingService (BlockDeletingService.java:getTasks(109))  
- Plan to choose 10 containers for block deletion, actually returns 0 valid 
containers.
2017-10-20 17:03:06,171 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
...
2017-10-20 17:03:54,279 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:55,267 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:56,282 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:56,461 [SCMBlockDeletingService#0] INFO  
utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
service : SCMBlockDeletingService
2017-10-20 17:03:56,467 [SCMBlockDeletingService#1] INFO  
block.SCMBlockDeletingService (SCMBlockDeletingService.java:call(129))  - 
Totally added 11 delete blocks command for 1 datanodes, task elapsed time: 6ms
2017-10-20 17:03:57,265 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:57,645 [KeyDeletingService#0] INFO  utils.BackgroundService 
(BackgroundService.java:run(99)) - Running background service : 
KeyDeletingService
2017-10-20 17:03:58,278 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:58,522 [Command processor thread] INFO  
commandhandler.DeleteBlocksCommandHandler 
(DeleteBlocksCommandHandler.

[jira] [Created] (HDFS-12691) Ozone: Decrease interval time of SCMBlockDeletingService for improving the efficiency

2017-10-20 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12691:


 Summary: Ozone: Decrease interval time of SCMBlockDeletingService 
for improving the efficiency
 Key: HDFS-12691
 URL: https://issues.apache.org/jira/browse/HDFS-12691
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone, performance
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


After logging the elapsed time of each block deletion task, I found some places 
where we can make improvements. The logging during testing:
{noformat}
2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:02:56,169 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:02:56,451 [SCMBlockDeletingService#0] INFO  
utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
service : SCMBlockDeletingService
2017-10-20 17:02:56,755 [KeyDeletingService#0] INFO  utils.BackgroundService 
(BackgroundService.java:run(99)) - Running background service : 
KeyDeletingService
2017-10-20 17:02:56,758 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
(KeyDeletingService.java:call(99))  - Found 11 to-delete keys in KSM
2017-10-20 17:02:56,775 [IPC Server handler 19 on 52342] INFO  
scm.StorageContainerManager (StorageContainerManager.java:deleteKeyBlocks(870)) 
 - SCM is informed by KSM to delete 11 blocks
2017-10-20 17:02:57,182 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:02:57,640 [KeyDeletingService#1] INFO  ksm.KeyDeletingService 
(KeyDeletingService.java:call(125))  - Number of key deleted from KSM DB: 
11, task elapsed time: 885ms
2017-10-20 17:02:58,168 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:03,178 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
...
2017-10-20 17:03:04,167 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:05,173 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  utils.BackgroundService 
(BackgroundService.java:run(99)) - Running background service : 
BlockDeletingService
2017-10-20 17:03:06,095 [BlockDeletingService#0] INFO  
background.BlockDeletingService (BlockDeletingService.java:getTasks(109))  
- Plan to choose 10 containers for block deletion, actually returns 0 valid 
containers.
2017-10-20 17:03:06,171 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
...
2017-10-20 17:03:54,279 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:55,267 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:56,282 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:56,461 [SCMBlockDeletingService#0] INFO  
utils.BackgroundService (BackgroundService.java:run(99)) - Running background 
service : SCMBlockDeletingService
2017-10-20 17:03:56,467 [SCMBlockDeletingService#1] INFO  
block.SCMBlockDeletingService (SCMBlockDeletingService.java:call(129))  - 
Totally added 11 delete blocks command for 1 datanodes, task elapsed time: 6ms
2017-10-20 17:03:57,265 [Datanode State Machine Thread - 0] INFO  
Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
2017-10-20 17:03:57,645 [KeyDeletingService#0] INFO  utils.BackgroundService 
(BackgroundService.java:run(99)) - Running background service : 
KeyDeletingService
2017-10-20 17:03:58,278 [Datanode State Machine Thread - 0] I

[jira] [Updated] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12690:

Attachment: HDFS-12690-HDFS-7240.001.patch

To test: do a full build.

The swagger file is at:
hadoop-hdfs-project/hadoop-hdfs/target/webapps/static/ozone.swagger.json

Or you can find it from the web UI, for example in the KSM UI:
http://localhost:9874/static/ozone.swagger.json

Download the swagger file and try to import it into any existing tool 
(e.g. Postman).

> Ozone: generate swagger descriptor for the Ozone REST Api
> -
>
> Key: HDFS-12690
> URL: https://issues.apache.org/jira/browse/HDFS-12690
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12690-HDFS-7240.001.patch
>
>
> This patch generates the ozone.swagger.json descriptor at compile time and 
> adds it to the static/ web folder (available from all web UIs).
> Note: I tested multiple methods of generating the swagger file: at runtime 
> and at compile time. Runtime generation needs additional dependencies and 
> multiple workarounds, as we have no ServletContext and ServletConfig in the 
> netty-based adapter (though it is possible to add a stub one). I prefer 
> compile-time generation because it is simpler and could also be used to 
> generate additional documentation.
> This patch contains only the basic @Api/@ApiMethod annotations; the 
> parameters are not yet annotated. Follow-up tasks:
>  * We can check how to generate parameter-level descriptions from the 
> javadoc. It is possible with a custom docket + custom swagger reader, but it 
> may not be worth it.
>  * We can add a swagger UI (there are many implementations). It is a licence 
> nightmare, as most of the UIs pull in an unlimited number of npm dependencies 
> with differently licensed (but mostly Apache + MIT) artifacts. I suggest 
> adding a swagger UI without the bundled javascript and loading it from a CDN. 
> It will work only with an active internet connection, but without licencing 
> issues.
>  * Long term, with this plugin we can also generate the content of 
> OzoneRest.md (after fine-tuning the swagger annotations).






[jira] [Updated] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12690:

Status: Patch Available  (was: Open)

> Ozone: generate swagger descriptor for the Ozone REST Api
> -
>
> Key: HDFS-12690
> URL: https://issues.apache.org/jira/browse/HDFS-12690
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12690-HDFS-7240.001.patch
>
>
> This patch generates the ozone.swagger.json descriptor at compile time and 
> adds it to the static/ web folder (available from all web UIs).
> Note: I tested multiple methods of generating the swagger file: at runtime 
> and at compile time. Runtime generation needs additional dependencies and 
> multiple workarounds, as we have no ServletContext and ServletConfig in the 
> netty-based adapter (though it is possible to add a stub one). I prefer 
> compile-time generation because it is simpler and could also be used to 
> generate additional documentation.
> This patch contains only the basic @Api/@ApiMethod annotations; the 
> parameters are not yet annotated. Follow-up tasks:
>  * We can check how to generate parameter-level descriptions from the 
> javadoc. It is possible with a custom docket + custom swagger reader, but it 
> may not be worth it.
>  * We can add a swagger UI (there are many implementations). It is a licence 
> nightmare, as most of the UIs pull in an unlimited number of npm dependencies 
> with differently licensed (but mostly Apache + MIT) artifacts. I suggest 
> adding a swagger UI without the bundled javascript and loading it from a CDN. 
> It will work only with an active internet connection, but without licencing 
> issues.
>  * Long term, with this plugin we can also generate the content of 
> OzoneRest.md (after fine-tuning the swagger annotations).






[jira] [Updated] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12690:

Component/s: ozone

> Ozone: generate swagger descriptor for the Ozone REST Api
> -
>
> Key: HDFS-12690
> URL: https://issues.apache.org/jira/browse/HDFS-12690
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This patch generates the ozone.swagger.json descriptor at compile time and 
> adds it to the static/ web folder (available from all web UIs).
> Note: I tested multiple methods of generating the swagger file: at runtime 
> and at compile time. Runtime generation needs additional dependencies and 
> multiple workarounds, as we have no ServletContext and ServletConfig in the 
> netty-based adapter (though it is possible to add a stub one). I prefer 
> compile-time generation because it is simpler and could also be used to 
> generate additional documentation.
> This patch contains only the basic @Api/@ApiMethod annotations; the 
> parameters are not yet annotated. Follow-up tasks:
>  * We can check how to generate parameter-level descriptions from the 
> javadoc. It is possible with a custom docket + custom swagger reader, but it 
> may not be worth it.
>  * We can add a swagger UI (there are many implementations). It is a licence 
> nightmare, as most of the UIs pull in an unlimited number of npm dependencies 
> with differently licensed (but mostly Apache + MIT) artifacts. I suggest 
> adding a swagger UI without the bundled javascript and loading it from a CDN. 
> It will work only with an active internet connection, but without licencing 
> issues.
>  * Long term, with this plugin we can also generate the content of 
> OzoneRest.md (after fine-tuning the swagger annotations).






[jira] [Assigned] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDFS-12690:
---

Assignee: Elek, Marton

> Ozone: generate swagger descriptor for the Ozone REST Api
> -
>
> Key: HDFS-12690
> URL: https://issues.apache.org/jira/browse/HDFS-12690
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This patch generates the ozone.swagger.json descriptor at compile time and 
> adds it to the static/ web folder (available from all web UIs).
> Note: I tested multiple methods of generating the swagger file: at runtime 
> and at compile time. Runtime generation needs additional dependencies and 
> multiple workarounds, as we have no ServletContext and ServletConfig in the 
> netty-based adapter (though it is possible to add a stub one). I prefer 
> compile-time generation because it is simpler and could also be used to 
> generate additional documentation.
> This patch contains only the basic @Api/@ApiMethod annotations; the 
> parameters are not yet annotated. Follow-up tasks:
>  * We can check how to generate parameter-level descriptions from the 
> javadoc. It is possible with a custom docket + custom swagger reader, but it 
> may not be worth it.
>  * We can add a swagger UI (there are many implementations). It is a licence 
> nightmare, as most of the UIs pull in an unlimited number of npm dependencies 
> with differently licensed (but mostly Apache + MIT) artifacts. I suggest 
> adding a swagger UI without the bundled javascript and loading it from a CDN. 
> It will work only with an active internet connection, but without licencing 
> issues.
>  * Long term, with this plugin we can also generate the content of 
> OzoneRest.md (after fine-tuning the swagger annotations).






[jira] [Updated] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12690:

Affects Version/s: HDFS-7240

> Ozone: generate swagger descriptor for the Ozone REST Api
> -
>
> Key: HDFS-12690
> URL: https://issues.apache.org/jira/browse/HDFS-12690
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> This patch generates the ozone.swagger.json descriptor at compile time and 
> adds it to the static/ web folder (available from all web UIs).
> Note: I tested multiple methods of generating the swagger file: at runtime 
> and at compile time. Runtime generation needs additional dependencies and 
> multiple workarounds, as we have no ServletContext and ServletConfig in the 
> netty-based adapter (though it is possible to add a stub one). I prefer 
> compile-time generation because it is simpler and could also be used to 
> generate additional documentation.
> This patch contains only the basic @Api/@ApiMethod annotations; the 
> parameters are not yet annotated. Follow-up tasks:
>  * We can check how to generate parameter-level descriptions from the 
> javadoc. It is possible with a custom docket + custom swagger reader, but it 
> may not be worth it.
>  * We can add a swagger UI (there are many implementations). It is a licence 
> nightmare, as most of the UIs pull in an unlimited number of npm dependencies 
> with differently licensed (but mostly Apache + MIT) artifacts. I suggest 
> adding a swagger UI without the bundled javascript and loading it from a CDN. 
> It will work only with an active internet connection, but without licencing 
> issues.
>  * Long term, with this plugin we can also generate the content of 
> OzoneRest.md (after fine-tuning the swagger annotations).






[jira] [Created] (HDFS-12690) Ozone: generate swagger descriptor for the Ozone REST Api

2017-10-20 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12690:
---

 Summary: Ozone: generate swagger descriptor for the Ozone REST Api
 Key: HDFS-12690
 URL: https://issues.apache.org/jira/browse/HDFS-12690
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Elek, Marton


This patch generates the ozone.swagger.json descriptor at compile time and adds 
it to the static/ web folder (available from all web UIs).

Note: I tested multiple methods of generating the swagger file: at runtime and 
at compile time. Runtime generation needs additional dependencies and multiple 
workarounds, as we have no ServletContext and ServletConfig in the netty-based 
adapter (though it is possible to add a stub one). I prefer compile-time 
generation because it is simpler and could also be used to generate additional 
documentation.

This patch contains only the basic @Api/@ApiMethod annotations; the parameters 
are not yet annotated. Follow-up tasks:

 * We can check how to generate parameter-level descriptions from the javadoc. 
It is possible with a custom docket + custom swagger reader, but it may not be 
worth it.
 * We can add a swagger UI (there are many implementations). It is a licence 
nightmare, as most of the UIs pull in an unlimited number of npm dependencies 
with differently licensed (but mostly Apache + MIT) artifacts. I suggest adding 
a swagger UI without the bundled javascript and loading it from a CDN. It will 
work only with an active internet connection, but without licencing issues.
 * Long term, with this plugin we can also generate the content of OzoneRest.md 
(after fine-tuning the swagger annotations).
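
To make the "basic @Api annotations" concrete, here is a minimal sketch of what 
an annotated JAX-RS endpoint typically looks like with swagger-core. The 
resource class, path and return type are assumptions for illustration only and 
are not the actual Ozone REST handlers.

{code:java}
// Illustrative only: a generic JAX-RS resource annotated for swagger-core.
// The class, path and handler body are made up; the Ozone handlers differ.
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Api(value = "bucket operations")
@Path("/{volume}/{bucket}")
public class BucketResourceSketch {

  @GET
  @ApiOperation("List keys in a bucket")
  public Response listBucket(@PathParam("volume") String volume,
                             @PathParam("bucket") String bucket) {
    // A real handler would delegate to the Ozone client/manager layer.
    return Response.ok().build();
  }
}
{code}

A build-time scanner (for example a swagger maven plugin) can then read such 
annotations during compilation and emit the JSON descriptor, matching the 
compile-time approach described above.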






[jira] [Updated] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-10-20 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12665:
--
Attachment: HDFS-12665-HDFS-9806.002.patch

Attaching an updated version using {{LevelDB}} (instead of {{LevelDb}}) and 
making the key/value serde functions public static so they can be used by 
HDFS-12591.

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load on the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency via the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in-memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
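
As a rough illustration of the wrapper idea (not the attached patch), the 
sketch below stores serialized key/value byte arrays in LevelDB through the 
pure-Java iq80 bindings. The class and method names are made up, and plain 
byte arrays stand in for the protobuf key (blockpool, blockid, genstamp) and 
value (url, offset, length, nonce) described above.

{code:java}
// Hypothetical sketch of a LevelDB-backed alias map; not HDFS-12665 code.
import java.io.File;
import java.io.IOException;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import org.iq80.leveldb.impl.Iq80DBFactory;

public class LevelDbAliasMapSketch implements AutoCloseable {
  private final DB db;

  public LevelDbAliasMapSketch(File dir) throws IOException {
    Options options = new Options().createIfMissing(true);
    this.db = Iq80DBFactory.factory.open(dir, options);
  }

  /** Store the serialized provided-storage location for a serialized block key. */
  public void write(byte[] blockKey, byte[] providedLocation) {
    db.put(blockKey, providedLocation);
  }

  /** Returns the serialized location, or null if the block is unknown. */
  public byte[] read(byte[] blockKey) {
    return db.get(blockKey);
  }

  @Override
  public void close() throws IOException {
    db.close();
  }
}
{code}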






[jira] [Commented] (HDFS-7060) Avoid taking locks when sending heartbeats from the DataNode

2017-10-20 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212333#comment-16212333
 ] 

Jiandan Yang  commented on HDFS-7060:
-

[~xinwei] [~brahmareddy] [~jojochuang] We encountered the same problem 
(branch-2.8.2): BPServiceActor#offerService was blocked because sendHeartBeat 
waited for the FsDataset lock, blockReceivedAndDeleted was delayed by about 
60s, and eventually the client could not close the file, throwing the 
exception "Unable to close file because the last blockxxx does not have enough 
number of replicas".

I think HDFS-7060 can solve our problem very well. Does this patch have any 
problems? Why hasn't it been merged into trunk?

> Avoid taking locks when sending heartbeats from the DataNode
> 
>
> Key: HDFS-7060
> URL: https://issues.apache.org/jira/browse/HDFS-7060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7060-002.patch, HDFS-7060.000.patch, 
> HDFS-7060.001.patch
>
>
> We're seeing that the heartbeat is blocked on the monitor of 
> {{FsDatasetImpl}} when the DN is under a heavy write load:
> {noformat}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:115)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:91)
> - locked <0x000780612fd8> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:563)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:827)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:743)
> - waiting to lock <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1006)
> at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:59)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:244)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:753)
> - locked <0x000780304fb8> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:621)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
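
Independent of this particular patch, the general technique for the problem is 
to stop computing storage reports under the dataset monitor and instead have 
the heartbeat thread read a pre-computed, immutable snapshot that a refresher 
updates off the heartbeat path. A minimal, self-contained sketch of that 
pattern follows; the class and method names are made up, and this is neither 
the HDFS-7060 patch nor the real FsDatasetImpl API.

{code:java}
// Illustration of the snapshot pattern only; names are hypothetical.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class StorageReportCache {
  /** Immutable snapshot of per-volume usage, published by a refresher thread. */
  private final AtomicReference<List<Long>> usageSnapshot =
      new AtomicReference<>(Collections.<Long>emptyList());

  /** Runs periodically off the heartbeat path (it may take the dataset lock). */
  public void refresh(List<Long> freshUsage) {
    usageSnapshot.set(Collections.unmodifiableList(new ArrayList<>(freshUsage)));
  }

  /** Heartbeat path: a lock-free read of the latest published snapshot. */
  public List<Long> reportForHeartbeat() {
    return usageSnapshot.get();
  }
}
{code}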






[jira] [Commented] (HDFS-9810) Allow support for more than one block replica per datanode

2017-10-20 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212300#comment-16212300
 ] 

Ewan Higgs commented on HDFS-9810:
--

This should be fixed by HDFS-12685.

> Allow support for more than one block replica per datanode
> --
>
> Key: HDFS-9810
> URL: https://issues.apache.org/jira/browse/HDFS-9810
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: datanode
>Reporter: Virajith Jalaparti
>
> Datanodes report and store only one replica of each block. It should be 
> possible to store multiple replicas among a datanode's different configured 
> storage types, particularly to support non-durable media and remote storage.


