[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Patch Available  (was: Open)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Open  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: HDFS-13245.009.patch

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Commented] (HDFS-5926) Documentation should clarify dfs.datanode.du.reserved impact from reserved disk capacity

2018-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464526#comment-16464526
 ] 

Hudson commented on HDFS-5926:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14129 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14129/])
HDFS-5926 Documentation should clarify dfs.datanode.du.reserved impact (fabbri: 
rev a732acd8730277df4d9b97b97101bc2bc768800f)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Documentation should clarify dfs.datanode.du.reserved impact from reserved 
> disk capacity
> 
>
> Key: HDFS-5926
> URL: https://issues.apache.org/jira/browse/HDFS-5926
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.20.2
>Reporter: Alexander Fahlke
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Fix For: 3.2.0
>
> Attachments: HDFS-5926-1.patch
>
>
> I'm using hadoop-0.20.2 on Debian Squeeze and ran into the same confusion as 
> many others with the parameter dfs.datanode.du.reserved. One day some 
> datanodes hit out-of-disk errors although there was space left on the disks.
> The following values are rounded to make the problem clearer:
> - the disk for the DFS data has 1000GB and only one partition (ext3) for DFS 
> data
> - you plan to set dfs.datanode.du.reserved to 20GB
> - the reserved-blocks-percentage set by tune2fs is 5% (the default)
> That gives all users except root 5% less capacity to use, although the 
> system reports the full 1000GB as usable to all users via df. The hadoop 
> daemons are not running as root.
> If I read it right, then Hadoop gets the free capacity via df.
>  
> Starting in 
> {{/src/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java}} on line 
> 350: {{return usage.getCapacity()-reserved;}}
> going to {{/src/core/org/apache/hadoop/fs/DF.java}}, which says:
> {{"Filesystem disk space usage statistics. Uses the unix 'df' program"}}
> When you have 5% reserved by tune2fs (in our case 50GB) and you set 
> dfs.datanode.du.reserved to only 20GB, then you can run into out-of-disk 
> errors that Hadoop can't handle.
> In this case you must add the planned 20GB of du.reserved to the capacity 
> reserved by tune2fs. This results in (at least) 70GB for 
> dfs.datanode.du.reserved in my case.
> Two ideas:
> # The documentation must be clear at this point to avoid this problem.
> # Hadoop could check for reserved space by tune2fs (or other tools) and add 
> this value to the dfs.datanode.du.reserved parameter.
> This ticket is a follow up from the Mailinglist: 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-user/201312.mbox/%3CCAHodO=Kbv=13T=2otz+s8nsodbs1icnzqyxt_0wdfxy5gks...@mail.gmail.com%3E
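The interaction described in the report can be worked through numerically. This is an illustrative sketch using the report's rounded numbers; the helper names are mine, not Hadoop APIs:

```java
// Capacity arithmetic for the dfs.datanode.du.reserved vs tune2fs interaction.
public class ReservedCapacityExample {

    // What HDFS believes is usable: df-reported capacity minus du.reserved.
    static long hdfsUsableGb(long diskGb, long duReservedGb) {
        return diskGb - duReservedGb;
    }

    // What a non-root daemon can actually write on ext3 with a root reserve.
    static long writableGb(long diskGb, int reservePct) {
        return diskGb - diskGb * reservePct / 100;
    }

    // du.reserved needed so HDFS never promises space ext3 will refuse.
    static long safeDuReservedGb(long diskGb, int reservePct, long plannedGb) {
        return diskGb * reservePct / 100 + plannedGb;
    }

    public static void main(String[] args) {
        long seen = hdfsUsableGb(1000, 20);   // 980GB assumed free by HDFS
        long real = writableGb(1000, 5);      // 950GB actually writable
        // seen > real: that 30GB gap is where the out-of-disk errors come from.
        System.out.println(seen + " " + real + " " + safeDuReservedGb(1000, 5, 20));
    }
}
```

The last value is the 70GB figure the reporter arrived at: with a 5% root reserve, the 50GB must be folded into du.reserved by hand.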






[jira] [Updated] (HDFS-5926) Documentation should clarify dfs.datanode.du.reserved impact from reserved disk capacity

2018-05-04 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HDFS-5926:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk after a minor tweak (than/then and clarify 
reserved-blocks-percentage). Thanks for the contribution [~gabor.bota]

> Documentation should clarify dfs.datanode.du.reserved impact from reserved 
> disk capacity
> 
>
> Key: HDFS-5926
> URL: https://issues.apache.org/jira/browse/HDFS-5926
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.20.2
>Reporter: Alexander Fahlke
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Fix For: 3.2.0
>
> Attachments: HDFS-5926-1.patch
>
>
> I'm using hadoop-0.20.2 on Debian Squeeze and ran into the same confusion as 
> many others with the parameter dfs.datanode.du.reserved. One day some 
> datanodes hit out-of-disk errors although there was space left on the disks.
> The following values are rounded to make the problem clearer:
> - the disk for the DFS data has 1000GB and only one partition (ext3) for DFS 
> data
> - you plan to set dfs.datanode.du.reserved to 20GB
> - the reserved-blocks-percentage set by tune2fs is 5% (the default)
> That gives all users except root 5% less capacity to use, although the 
> system reports the full 1000GB as usable to all users via df. The hadoop 
> daemons are not running as root.
> If I read it right, then Hadoop gets the free capacity via df.
>  
> Starting in 
> {{/src/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java}} on line 
> 350: {{return usage.getCapacity()-reserved;}}
> going to {{/src/core/org/apache/hadoop/fs/DF.java}}, which says:
> {{"Filesystem disk space usage statistics. Uses the unix 'df' program"}}
> When you have 5% reserved by tune2fs (in our case 50GB) and you set 
> dfs.datanode.du.reserved to only 20GB, then you can run into out-of-disk 
> errors that Hadoop can't handle.
> In this case you must add the planned 20GB of du.reserved to the capacity 
> reserved by tune2fs. This results in (at least) 70GB for 
> dfs.datanode.du.reserved in my case.
> Two ideas:
> # The documentation must be clear at this point to avoid this problem.
> # Hadoop could check for reserved space by tune2fs (or other tools) and add 
> this value to the dfs.datanode.du.reserved parameter.
> This ticket is a follow up from the Mailinglist: 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-user/201312.mbox/%3CCAHodO=Kbv=13T=2otz+s8nsodbs1icnzqyxt_0wdfxy5gks...@mail.gmail.com%3E






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464400#comment-16464400
 ] 

genericqa commented on HDDS-17:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-ozone/tools in trunk has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdds/server-scm generated 3 new + 1 unchanged - 
0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/server-scm |
|  |  Synchronization performed on java.util.concurrent.ConcurrentHashMap in 
org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.getContainers(UUID)  
At 
Node2ContainerMap.java:org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap.getContainers
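The FindBugs finding above is about taking a monitor on a ConcurrentHashMap. A minimal sketch of the flagged pattern and its usual replacement, with hypothetical names rather than the real Node2ContainerMap code:

```java
import java.util.Collections;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: synchronized(map) adds a monitor that the map's own operations
// never acquire, so it serializes nothing useful and forfeits lock-free reads.
class Node2ContainerSketch {
    private final ConcurrentHashMap<UUID, Set<Long>> map = new ConcurrentHashMap<>();

    // Flagged style: explicit monitor on the concurrent collection.
    Set<Long> getContainersFlagged(UUID datanode) {
        synchronized (map) {  // the pattern FindBugs reports
            Set<Long> s = map.get(datanode);
            return s == null ? Collections.emptySet() : s;
        }
    }

    // Preferred style: the map's atomic operations give the same guarantee.
    Set<Long> getContainers(UUID datanode) {
        return map.getOrDefault(datanode, Collections.emptySet());
    }

    void addContainer(UUID datanode, long containerId) {
        // computeIfAbsent is atomic per key; no external locking needed.
        map.computeIfAbsent(datanode, k -> ConcurrentHashMap.newKeySet()).add(containerId);
    }
}
```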

[jira] [Commented] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-05-04 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464312#comment-16464312
 ] 

Chao Sun commented on HDFS-13286:
-

Thanks [~xkrogen] for all the help!

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, hdfs, namenode
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13286-HDFS-12943.000.patch, 
> HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch, 
> HDFS-13286-HDFS-12943.003.patch, HDFS-13286-HDFS-12943.004.patch, 
> HDFS-13286-HDFS-12943.005.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.
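The transitions described above can be driven from the command line. A usage sketch, where "nn1" is a placeholder serviceId; -transitionToStandby and -getServiceState are existing haadmin subcommands, and -transitionToObserver is the one this issue adds:

```shell
# Placeholder serviceId "nn1"; requires a cluster built with observer support.
hdfs haadmin -transitionToObserver nn1   # standby -> observer
hdfs haadmin -transitionToStandby nn1    # observer -> standby
hdfs haadmin -getServiceState nn1        # confirm the resulting state
```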






[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-05-04 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13286:
---
Component/s: namenode
 hdfs
 ha

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, hdfs, namenode
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13286-HDFS-12943.000.patch, 
> HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch, 
> HDFS-13286-HDFS-12943.003.patch, HDFS-13286-HDFS-12943.004.patch, 
> HDFS-13286-HDFS-12943.005.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.






[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-05-04 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13286:
---
   Resolution: Fixed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13286-HDFS-12943.000.patch, 
> HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch, 
> HDFS-13286-HDFS-12943.003.patch, HDFS-13286-HDFS-12943.004.patch, 
> HDFS-13286-HDFS-12943.005.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.






[jira] [Commented] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-05-04 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464300#comment-16464300
 ] 

Erik Krogen commented on HDFS-13286:


Great! I agree with you on the RM side of things.

Your precommit is available 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24130]. It timed out 
(5 hours) since it had changes in YARN, Common, and HDFS. Common/HDFS tests 
were able to complete; YARN was killed. Given the tiny nature of the YARN 
changes I am not worried about this.

These are the test failures reported:
{quote}
 
org.apache.hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy.testStripedFile1
 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA
 
org.apache.hadoop.hdfs.server.namenode.TestReencryptionWithKMS.testReencryptionKMSDown
 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure
 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
{quote}
None of these failures look related to this change.

The diff reports are clean except for [checkstyle|
https://builds.apache.org/job/PreCommit-HDFS-Build/24130/artifact/out/diff-checkstyle-root.txt]:
{quote}
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/DummyHAService.java:59:
  boolean failToBecomeActive, failToBecomeStandby, failToBecomeObserver,:11: 
Variable 'failToBecomeActive' must be private and have accessor methods. 
[VisibilityModifier]
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/DummyHAService.java:59:
  boolean failToBecomeActive, failToBecomeStandby, failToBecomeObserver,:31: 
Variable 'failToBecomeStandby' must be private and have accessor methods. 
[VisibilityModifier]
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/DummyHAService.java:59:
  boolean failToBecomeActive, failToBecomeStandby, failToBecomeObserver,:52: 
Variable 'failToBecomeObserver' must be private and have accessor methods. 
[VisibilityModifier]
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/DummyHAService.java:60:
  failToFence;:7: Variable 'failToFence' must be private and have accessor 
methods. [VisibilityModifier]
{quote}
These are primarily existing issues and I think it's okay to follow the 
convention here.

I am going to commit this now.

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13286-HDFS-12943.000.patch, 
> HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch, 
> HDFS-13286-HDFS-12943.003.patch, HDFS-13286-HDFS-12943.004.patch, 
> HDFS-13286-HDFS-12943.005.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.






[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-05-04 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464279#comment-16464279
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Thank you everyone for your input offline. I think I see now.

I believe we need to move the following block out of {{Server.setupResponse()}} 
somehow into {{Responder.sendResponse()}} or even 
{{Server.RpcCall.doResponse}}: 
{code:java}
if(alignmentContext != null) {
  alignmentContext.updateResponseState(headerBuilder);
}
{code}

The challenge will be that the {{RpcResponseHeaderProto}} is already 
constructed by this time.
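For illustration, the proposed move can be sketched with stand-in types (the real RpcResponseHeaderProto is a protobuf builder and Server's internals are more involved; every name below is a placeholder, not Hadoop code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for the alignment-context hook discussed above.
interface AlignmentContextSketch {
    void updateResponseState(HeaderBuilderSketch builder);
}

// Stand-in for the mutable response-header builder.
class HeaderBuilderSketch {
    long stateId = -1;
}

class ServerStateSketch implements AlignmentContextSketch {
    private final AtomicLong lastSeenStateId = new AtomicLong();

    void advance() { lastSeenStateId.incrementAndGet(); }

    @Override
    public void updateResponseState(HeaderBuilderSketch b) {
        b.stateId = lastSeenStateId.get();  // freshest value at send time
    }
}

class ResponderSketch {
    // Stamping here (send time) rather than in setupResponse() means the header
    // carries the newest state id, at the cost of touching a header that
    // setupResponse() may already have serialized.
    static long sendResponse(AlignmentContextSketch ctx, HeaderBuilderSketch b) {
        if (ctx != null) {
            ctx.updateResponseState(b);
        }
        return b.stateId;
    }
}
```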


> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, 
> HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch, 
> HDFS-13399-HDFS-12943.005.patch, HDFS-13399-HDFS-12943.006.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.






[jira] [Commented] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464259#comment-16464259
 ] 

genericqa commented on HDDS-20:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-ozone/common in trunk has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} objectstore-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 56s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{colo

[jira] [Commented] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464247#comment-16464247
 ] 

Anu Engineer commented on HDDS-23:
--

This was added with the anticipation that we will use this when SCM HA is 
enabled.

So I am +1 on removing this for now and bringing it back later. Just wanted to 
make sure that we are on the same page.

> Remove SCMNodeAddressList from SCMRegisterRequestProto
> --
>
> Key: HDDS-23
> URL: https://issues.apache.org/jira/browse/HDDS-23
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-23.000.patch
>
>
> {{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by SCM, and 
> it is not necessary to send it in the datanode's register call. 
> {{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Comment Edited] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464247#comment-16464247
 ] 

Anu Engineer edited comment on HDDS-23 at 5/4/18 6:32 PM:
--

This was added with the anticipation that we will use this when SCM HA is 
enabled.

So I am +1 on removing this for now and bringing it back later. Just wanted to 
make sure that we are on the same page.


was (Author: anu):
This was added with the anticipation that we will use this when SCM HA is 
enabled.

So I am +! on removing this for now and bring it back later. Just wanted to 
make sure that we are on the same page.

> Remove SCMNodeAddressList from SCMRegisterRequestProto
> --
>
> Key: HDDS-23
> URL: https://issues.apache.org/jira/browse/HDDS-23
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-23.000.patch
>
>
> {{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by SCM, and 
> it is not necessary to send it in the datanode's register call. 
> {{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464244#comment-16464244
 ] 

Anu Engineer commented on HDDS-17:
--

[~elek] and [~nandakumar131] Thanks for the comments. I have uploaded patch v2 
that addresses all of them.

 bq. Instead of Long we can use ContainerID
 Fixed, good catch.

 bq. In Node2ContainerMap#updateDatanodeMap can we throw exception if the node 
is not already present.
 Fixed.

Some more changes in this version of the patch:

# Made the class thread-safe.
# Added a function to get the container list for a given datanode.
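As an editorial illustration of the two points above, here is a minimal sketch of a thread-safe node-to-container map with a per-datanode lookup. All names and signatures ({{Node2ContainerMapSketch}}, {{insertNewDatanode}}, {{getContainers}}) are hypothetical, not the actual HDDS-17 patch:

```java
import java.util.Collections;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only -- not the actual HDDS-17 code. It shows one way to
// make a node-to-container map thread-safe and to expose a per-datanode
// container lookup, along the lines of the review comments above.
public class Node2ContainerMapSketch {
  private final ConcurrentHashMap<UUID, Set<Long>> dn2c = new ConcurrentHashMap<>();

  /** Registers a new datanode; rejects duplicates. */
  public void insertNewDatanode(UUID dnId, Set<Long> containers) {
    Set<Long> fresh = ConcurrentHashMap.newKeySet();
    fresh.addAll(containers);
    if (dn2c.putIfAbsent(dnId, fresh) != null) {
      throw new IllegalStateException("Node already present: " + dnId);
    }
  }

  /** Replaces the container set; throws if the node is unknown (per review). */
  public void updateDatanodeMap(UUID dnId, Set<Long> containers) {
    Set<Long> fresh = ConcurrentHashMap.newKeySet();
    fresh.addAll(containers);
    if (dn2c.replace(dnId, fresh) == null) {
      throw new IllegalStateException("Node not found: " + dnId);
    }
  }

  /** The new accessor discussed above: containers hosted by a datanode. */
  public Set<Long> getContainers(UUID dnId) {
    Set<Long> s = dn2c.get(dnId);
    return s == null ? Collections.emptySet() : Collections.unmodifiableSet(s);
  }
}
```

Using {{ConcurrentHashMap.putIfAbsent}}/{{replace}} makes each insert/update atomic without a coarse lock; the real patch may well structure this differently.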

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch, HDDS-17.002.patch
>
>
> The current SCM state map is maintained in NodeStateManager. This is the 
> first of several refactorings to split it into small, independent classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-17:
-
Attachment: HDDS-17.002.patch

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch, HDDS-17.002.patch
>
>
> The current SCM state map is maintained in NodeStateManager. This is the 
> first of several refactorings to split it into small, independent classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13522) Support observer node from Router-Based Federation

2018-05-04 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464214#comment-16464214
 ] 

Chao Sun commented on HDFS-13522:
-

Sounds good [~elgoiri]. I'll finish 
[HDFS-12976|https://issues.apache.org/jira/browse/HDFS-12976] first.

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Priority: Major
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464215#comment-16464215
 ] 

genericqa commented on HDDS-18:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  7s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:gree

[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464200#comment-16464200
 ] 

genericqa commented on HDFS-13245:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  9s{color} | {color:orange} root: The patch generated 5 new + 0 unchanged - 
0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
37s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13245

[jira] [Commented] (HDFS-12981) HDFS renameSnapshot to Itself for Non Existent snapshot should throw error

2018-05-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464197#comment-16464197
 ] 

Xiao Chen commented on HDFS-12981:
--

Thanks [~saileshpatel] for creating the jira with good details, and [~knanasi] 
for working on this!

The patch looks pretty good to me. Could you please fix the remaining 
checkstyle issues reported by Jenkins? We don't change existing code 
unnecessarily, but we try not to introduce new warnings.

There is also an unnecessary whitespace change after 
{{TestSnapshotRename#testRenameToExistingSnapshot}}; let's drop that too.

> HDFS renameSnapshot to Itself for Non-Existent snapshot should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch
>
>
> When trying to rename a non-existent HDFS snapshot to ITSELF, no error is 
> reported and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs -createSnapshot /tmp/dir1 snap1_dir
> Renaming from a non-existent snapshot to another non-existent name errors out 
> with return code 1. This is correct:
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist ; echo $?
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Renaming from a non-existent snapshot to the same non-existent name produces 
> no error and return code 0, instead of an error and return code 1:
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist ; echo $?
> Current behavior: no error and return code 0.
> Expected behavior: an error is returned with return code 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7527) TestDecommission.testIncludeByRegistrationName fails occasionally in trunk

2018-05-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464177#comment-16464177
 ] 

Wei-Chiu Chuang commented on HDFS-7527:
---

Hmm, somehow this test always times out, both before and after the patch.
How come it passed the Hadoop precommit build?

> TestDecommission.testIncludeByRegistrationName fails occasionally in trunk
> ---
>
> Key: HDFS-7527
> URL: https://issues.apache.org/jira/browse/HDFS-7527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: Yongjun Zhang
>Assignee: Binglin Chang
>Priority: Major
>  Labels: flaky-test
> Attachments: HDFS-7527.001.patch, HDFS-7527.002.patch, 
> HDFS-7527.003.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/
> {quote}
> Error Message
> test timed out after 36 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 36 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)
> 2014-12-15 12:00:19,958 ERROR datanode.DataNode 
> (BPServiceActor.java:run(836)) - Initialization failed for Block pool 
> BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to 
> localhost/127.0.0.1:40565 Datanode denied communication with namenode because 
> the host is not in the include-list: DatanodeRegistration(127.0.0.1, 
> datanodeUuid=55d8cbff-d8a3-4d6d-ab64-317fff0ee279, infoPort=54318, 
> infoSecurePort=0, ipcPort=43726, 
> storageInfo=lv=-56;cid=testClusterID;nsid=903754315;c=0)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:915)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4402)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1196)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26296)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:966)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2127)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2123)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2121)
> 2014-12-15 12:00:29,087 FATAL datanode.DataNode 
> (BPServiceActor.java:run(841)) - Initialization failed for Block pool 
> BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to 
> localhost/127.0.0.1:40565. Exiting. 
> java.io.IOException: DN shut down before block pool connected
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:186)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> Found by tool proposed in HADOOP-11045:
> {quote}
> [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
> Hadoop-Hdfs-trunk -n 5 | tee bt.log
> Recently FAILED builds in url: 
> https://builds.apache.org//job/Hadoop-Hdfs-trunk
> THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, 
> as listed below:
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport 
> (2014-12-15 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> Failed test: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport 
> (2014-12-13 10:32:27)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport 
> (2014-12-13 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-tru

[jira] [Comment Edited] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-05-04 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464160#comment-16464160
 ] 

Plamen Jeliazkov edited comment on HDFS-13399 at 5/4/18 5:21 PM:
-

Yes I propose to remove it from {{DFSClient}}. I think for now I will create a 
new {{ProxyProvider}} that is only used in tests and makes use of 
{{AlignmentContext}}. I will be able to pull it because I'll have access to the 
instance within my tests. This is similar to how others created their own 
{{RpcEngine}} implementations within unit tests. This should be enough to showcase the 
stateId transfer. We can remove my class if we want after we have the 
{{StandbyReadsProxyProvider}} working.

Regarding the transactionId issue, let me clarify below what I mean:

Imagine a fresh HA-enabled DFS, at transactionId 0, is initialized. A client 
connects and makes a single directory. We should now expect to be at 
transactionId 1 and expect that the client received, in the RPC response 
header, a stateId of 1. However this is not the case. The reason it is not the 
case is because HA-enabled NameNodes utilize {{FSEditLogAsync}} which updates 
the txid field, the field we rely on in 
{{FSNamesystem.getLastWrittenTransactionId}}, asynchronously from the client 
call. The result is that, in the RPC response header, the client receives a 
stateId of 0, not 1. This is clearly incorrect: we do not want a client to 
connect to a NameNode that is behind in state.

Clearly this is just a race condition but it has already appeared in my unit 
tests.

One idea is to modify {{FSEditLogAsync}} like so:
{code:java}
  @Override
  long getLastWrittenTxIdWithoutLock() {
return super.getLastWrittenTxIdWithoutLock() + editPendingQ.size() + 
syncWaitQ.size();
  }
{code}
 However, I am unsure whether this would be correct or safe to do. Input from 
others would be welcome.


was (Author: zero45):
Yes I propose to remove it from {{DFSClient}}. I think for now I will create a 
new {{ProxyProvider}} that is only used in tests and makes use of 
{{AlignmentContext}}. I will be able to pull it because I'll have access to the 
instance within my tests. Similar to how others created their own {{RpcEngine}} 
implementations within unit tests. This should be enough to showcase the 
stateId transfer. We can remove my class if we want after we have the 
{{StandbyReadsProxyProvider}} working.

Regarding the issue about the transactionId – I want to clear up below what I 
am talking about:

Imagine a fresh HA-enabled DFS, at transactionId 0, is initialized. A client 
connects and makes a single directory. We should now expect to be at 
transactionId 1 and expect that the client received, in the RPC response 
header, a stateId of 1. However this is not the case. The reason it is not the 
case is because HA-enabled NameNodes utilize {{FSEditLogAsync}} which updates 
the txid field, the field we rely on in 
{{FSNamesystem.getLastWrittenTransactionId}}, asynchronously from the client 
call.  The result is that in the RPC response header the client receives they 
get a stateId of 0. Not 1. This is clearly incorrect. We do not want a client 
to connect to a NameNode that is behind in state.

Clearly this is just a race condition but it has already appeared in my unit 
tests.

One idea is to modify {{FSEditLogAsync}} like so:
{code:java}
  @Override
  long getLastWrittenTxIdWithoutLock() {
return super.getLastWrittenTxIdWithoutLock() + editPendingQ.size() + 
syncWaitQ.size();
  }
{code}
 However I am unsure if this would be correct / safe to do. Input from others 
would be desired here.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, 
> HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch, 
> HDFS-13399-HDFS-12943.005.patch, HDFS-13399-HDFS-12943.006.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-05-04 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464160#comment-16464160
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Yes I propose to remove it from {{DFSClient}}. I think for now I will create a 
new {{ProxyProvider}} that is only used in tests and makes use of 
{{AlignmentContext}}. I will be able to pull it because I'll have access to the 
instance within my tests. Similar to how others created their own {{RpcEngine}} 
implementations within unit tests. This should be enough to showcase the 
stateId transfer. We can remove my class if we want after we have the 
{{StandbyReadsProxyProvider}} working.

Regarding the transactionId issue, let me clarify below what I mean:

Imagine a fresh HA-enabled DFS, at transactionId 0, is initialized. A client 
connects and makes a single directory. We should now expect to be at 
transactionId 1 and expect that the client received, in the RPC response 
header, a stateId of 1. However this is not the case. The reason it is not the 
case is because HA-enabled NameNodes utilize {{FSEditLogAsync}} which updates 
the txid field, the field we rely on in 
{{FSNamesystem.getLastWrittenTransactionId}}, asynchronously from the client 
call. The result is that, in the RPC response header, the client receives a 
stateId of 0, not 1. This is clearly incorrect: we do not want a client 
to connect to a NameNode that is behind in state.

Clearly this is just a race condition but it has already appeared in my unit 
tests.

One idea is to modify {{FSEditLogAsync}} like so:
{code:java}
  @Override
  long getLastWrittenTxIdWithoutLock() {
return super.getLastWrittenTxIdWithoutLock() + editPendingQ.size() + 
syncWaitQ.size();
  }
{code}
 However, I am unsure whether this would be correct or safe to do. Input from 
others would be welcome.
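To make the race concrete, here is an editorial, single-threaded toy. {{AsyncEditLogSim}} and its methods are invented names, not Hadoop code; it collapses the pending and sync-wait queues into one to show how a txid read can lag an enqueued-but-unsynced edit, and how counting the pending queue compensates:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative toy only -- not Hadoop's FSEditLogAsync. The async sync thread
// is modeled as an explicit syncOne() call so the lag is easy to see.
public class AsyncEditLogSim {
  private long lastWrittenTxId = 0;             // advanced only when an edit syncs
  private final Queue<Long> editPendingQ = new ArrayDeque<>();

  /** Client side: the edit is enqueued, not yet durable. */
  public void logEdit(long txId) {
    editPendingQ.add(txId);
  }

  /** Async side: drain one pending edit, as the sync thread would. */
  public void syncOne() {
    Long tx = editPendingQ.poll();
    if (tx != null) {
      lastWrittenTxId = tx;
    }
  }

  /** What getLastWrittenTransactionId-style readers see: possibly stale. */
  public long getLastWrittenTxId() {
    return lastWrittenTxId;
  }

  /** The workaround sketched above: also count queued-but-unsynced edits. */
  public long getLastWrittenTxIdIncludingPending() {
    return lastWrittenTxId + editPendingQ.size();
  }
}
```

In the real {{FSEditLogAsync}} the pending and sync-wait queues are separate, hence the proposed override adds both sizes; whether reading those sizes without a lock is safe is exactly the open question above.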

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, 
> HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch, 
> HDFS-13399-HDFS-12943.005.patch, HDFS-13399-HDFS-12943.006.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-04 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464119#comment-16464119
 ] 

Lokesh Jain commented on HDDS-21:
-

HDDS-21.001.patch can be submitted after HDDS-20.

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-21.001.patch, HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for renaming a key within a bucket via the rest client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-21:

Attachment: HDDS-21.001.patch

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-21.001.patch, HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for renaming a key within a bucket via the rest client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12666) [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464099#comment-16464099
 ] 

genericqa commented on HDFS-12666:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-12666 does not apply to HDFS-12090. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12666 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922071/HDFS-12666-HDFS-12090.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24138/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount
> --
>
> Key: HDFS-12666
> URL: https://issues.apache.org/jira/browse/HDFS-12666
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-12666-HDFS-12090.001.patch
>
>
> Implement the Provided Storage Mount Manager. This is a service (thread) in 
> the Namenode that manages backup mounting, unmounting, and snapshotting, and 
> monitors the progress of backups.
> On mount, the mount manager writes XATTR information at the top level of the 
> mount to do the appropriate bookkeeping. This is done to maintain state in 
> case the Namenode falls over.
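A rough sketch of that bookkeeping idea, with a plain in-memory map standing in for per-inode extended attributes. The xattr key names (`user.provided.*`) and the class below are illustrative assumptions, not the patch's actual keys or types:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class MountBookkeepingSketch {
  // Stand-in for per-path extended attributes (path -> xattr name -> value).
  private final Map<String, Map<String, byte[]>> xattrs = new HashMap<>();

  // On mount, record the remote target and mount state under the mount root,
  // so a restarted Namenode can rebuild its mount state from the namespace.
  void mount(String mountRoot, String remoteTarget) {
    Map<String, byte[]> attrs =
        xattrs.computeIfAbsent(mountRoot, k -> new HashMap<>());
    attrs.put("user.provided.remote", remoteTarget.getBytes());
    attrs.put("user.provided.state", "MOUNTED".getBytes());
  }

  // On restart, recover the bookkeeping by reading the xattrs back.
  Optional<String> recoverRemote(String mountRoot) {
    Map<String, byte[]> attrs = xattrs.get(mountRoot);
    return attrs == null
        ? Optional.empty()
        : Optional.of(new String(attrs.get("user.provided.remote")));
  }
}
```

Since the bookkeeping lives in the namespace itself rather than in Namenode memory, it survives a crash by construction.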






[jira] [Commented] (HDFS-13522) Support observer node from Router-Based Federation

2018-05-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464096#comment-16464096
 ] 

Íñigo Goiri commented on HDFS-13522:


Thanks [~csun] for the clarification; that makes sense.
As a simplistic approach, we could just ignore what the client does and let the 
Router decide what to use.
However, this may not fit the model envisioned by HDFS-12976.
I would finish HDFS-12976 first; based on that, we could go fancier and use 
some caller context or an RPC header.
Anyway, while working on HDFS-12976, just keep this possible requirement in mind.

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Priority: Major
>
> Changes will need to be made to the Router to support the new observer node.
> One such change will be to make the Router understand the observer state, 
> e.g. in {{FederationNamenodeServiceState}}.
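To illustrate, here is a toy selection routine under the assumption that the service-state enum grows an OBSERVER value; the enum, class names, and routing policy below are hypothetical sketches, not the actual RBF classes:

```java
import java.util.List;
import java.util.NoSuchElementException;

class ObserverRoutingSketch {
  // Illustrative states; OBSERVER is the hypothetical new value.
  enum State { ACTIVE, OBSERVER, STANDBY, UNAVAILABLE }

  static class Namenode {
    final String id;
    final State state;
    Namenode(String id, State state) {
      this.id = id;
      this.state = state;
    }
  }

  // Reads prefer an observer if one is registered; writes (and reads with
  // no observer available) fall back to the active namenode.
  static Namenode pick(List<Namenode> nns, boolean isRead) {
    if (isRead) {
      for (Namenode nn : nns) {
        if (nn.state == State.OBSERVER) {
          return nn;
        }
      }
    }
    for (Namenode nn : nns) {
      if (nn.state == State.ACTIVE) {
        return nn;
      }
    }
    throw new NoSuchElementException("no usable namenode");
  }
}
```

Whether the read/write distinction comes from the client or is decided by the Router is exactly the open question in the comment above.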






[jira] [Commented] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464089#comment-16464089
 ] 

Íñigo Goiri commented on HDFS-13528:


A couple of somewhat related JIRAs are HDFS-13346 and HDFS-13380.
[~linyiqun], you are more familiar with the quotas.
Do you mind tracking this?

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.
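The fix being described can be sketched in plain Java: catch the exception per mount entry so one over-quota directory does not abort the refresh of the rest. MountEntry, refreshUsage, and periodicInvoke below are hypothetical stand-ins, not the real RouterQuotaUpdateService API.

```java
import java.util.ArrayList;
import java.util.List;

class QuotaRefreshSketch {
  static class QuotaExceededException extends Exception {}

  // Hypothetical stand-in for a mount table entry.
  static class MountEntry {
    final String path;
    final boolean overQuota;
    boolean usageRefreshed = false;
    MountEntry(String path, boolean overQuota) {
      this.path = path;
      this.overQuota = overQuota;
    }
  }

  // Simulates fetching quota usage; throws when the directory is over quota.
  static void refreshUsage(MountEntry e) throws QuotaExceededException {
    if (e.overQuota) {
      throw new QuotaExceededException();
    }
    e.usageRefreshed = true;
  }

  // Catching per entry (instead of letting the exception escape the loop)
  // means the remaining mount entries still get their usage refreshed.
  static List<String> periodicInvoke(List<MountEntry> entries) {
    List<String> overQuota = new ArrayList<>();
    for (MountEntry e : entries) {
      try {
        refreshUsage(e);
      } catch (QuotaExceededException ex) {
        overQuota.add(e.path); // record and keep going
      }
    }
    return overQuota;
  }
}
```

With entries /a (ok), /b (over quota), /c (ok), this reports /b as over quota while /a and /c still get refreshed.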






[jira] [Comment Edited] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464089#comment-16464089
 ] 

Íñigo Goiri edited comment on HDFS-13528 at 5/4/18 4:23 PM:


A couple somewhat related JIRAs are HDFS-13346 and HDFS-13380.
[~linyiqun], you are more familiar with the quotas.
Do you mind tracking this?


was (Author: elgoiri):
A couple somewat related JIRAs are HDFS-13346 and HDFS-13380.
[~linyiqun], you are more familiar about the quotas.
Do you mind tracking this?

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.






[jira] [Updated] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13528:
---
Summary: RBF: If a directory exceeds quota limit then quota usage is not 
refreshed for other mount entries   (was: If a directory exceeds quota limit 
then quota usage is not refreshed for other mount entries )

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.






[jira] [Updated] (HDFS-12666) [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount

2018-05-04 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12666:
--
Assignee: Ewan Higgs
  Status: Patch Available  (was: Open)

Attaching a patch that implements the Mount Manager along with the tracking 
system that will send DNA_BACKUP commands to the datanode.

This was rebased onto 9d7a9031a5978efc8d97566e35ebaace20db2353 with the 
following two patches applied first:

HDFS-13186.004.patch

HDFS-13310-HDFS-12090.002.patch

> [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount
> --
>
> Key: HDFS-12666
> URL: https://issues.apache.org/jira/browse/HDFS-12666
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-12666-HDFS-12090.001.patch
>
>
> Implement the Provided Storage Mount Manager. This is a service (thread) in 
> the Namenode that manages backup mounts, unmounts, and snapshotting, and 
> monitors the progress of backups.
> On mount, the mount manager writes XATTR information at the top level of the 
> mount to do the appropriate bookkeeping. This is done so that state survives 
> if the Namenode falls over.






[jira] [Commented] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464064#comment-16464064
 ] 

genericqa commented on HDDS-23:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 46m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921963/HDDS-23.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 4a85e95b782a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a3b416f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| fin

[jira] [Updated] (HDFS-12666) [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount

2018-05-04 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12666:
--
Attachment: HDFS-12666-HDFS-12090.001.patch

> [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount
> --
>
> Key: HDFS-12666
> URL: https://issues.apache.org/jira/browse/HDFS-12666
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Priority: Major
> Attachments: HDFS-12666-HDFS-12090.001.patch
>
>
> Implement the Provided Storage Mount Manager. This is a service (thread) in 
> the Namenode that manages backup mounts, unmounts, and snapshotting, and 
> monitors the progress of backups.
> On mount, the mount manager writes XATTR information at the top level of the 
> mount to do the appropriate bookkeeping. This is done so that state survives 
> if the Namenode falls over.






[jira] [Commented] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-04 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464055#comment-16464055
 ] 

Lokesh Jain commented on HDDS-20:
-

HDDS-20.001.patch fixes ozone contract tests.

> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDDS-20
> URL: https://issues.apache.org/jira/browse/HDDS-20
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-20.001.patch, HDFS-13228-HDFS-7240.001.patch
>
>
> This JIRA aims to implement a rename operation on a key within a bucket for 
> the RPC client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, since a rename 
> would become just a DB update in the KSM.
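The "just a DB update" point can be sketched with a plain map standing in for the KSM key table: a rename moves only the key-to-metadata mapping, and no block data is rewritten. The class and method names here are illustrative, not the KSM's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

class KsmRenameSketch {
  // Stand-in for the KSM key table: key name -> opaque block metadata.
  private final Map<String, String> keyTable = new HashMap<>();

  void put(String key, String blockMeta) {
    keyTable.put(key, blockMeta);
  }

  // Rename as a single metadata move: the block metadata is untouched,
  // only the key it is filed under changes.
  void rename(String from, String to) {
    String meta = keyTable.remove(from);
    if (meta == null) {
      throw new NoSuchElementException("no such key: " + from);
    }
    if (keyTable.containsKey(to)) {
      keyTable.put(from, meta); // restore before failing
      throw new IllegalStateException("target already exists: " + to);
    }
    keyTable.put(to, meta);
  }

  String get(String key) {
    return keyTable.get(key);
  }
}
```

Contrast with the current OzoneFilesystem behavior, where a rename rewrites the key's data end to end.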






[jira] [Updated] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-20:

Attachment: HDDS-20.001.patch

> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDDS-20
> URL: https://issues.apache.org/jira/browse/HDDS-20
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-20.001.patch, HDFS-13228-HDFS-7240.001.patch
>
>
> This JIRA aims to implement a rename operation on a key within a bucket for 
> the RPC client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, since a rename 
> would become just a DB update in the KSM.






[jira] [Commented] (HDFS-13486) Backport HDFS-11817 to branch-2.7

2018-05-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464045#comment-16464045
 ] 

Wei-Chiu Chuang commented on HDFS-13486:


None of the test failures are related, and the whitespace warning is unrelated 
as well.
Will commit rev 003 by end of day. Please shout out if you have concerns.

> Backport HDFS-11817 to branch-2.7
> -
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-11817.branch-2.7.001.patch, 
> HDFS-11817.branch-2.7.002.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-04 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463995#comment-16463995
 ] 

Nanda kumar commented on HDDS-17:
-

Thanks [~anu] for working on this. The patch looks good to me; some minor 
suggestions:
* Instead of {{Long}} we can use 
{{org.apache.hadoop.hdds.scm.container.ContainerID}}.
* In {{Node2ContainerMap#updateDatanodeMap}}, can we throw an exception if the 
node is not already present?
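A minimal sketch of the suggested contract, using plain java.util types (the real Node2ContainerMap differs; container IDs are kept as Long here for brevity, even though the suggestion is to use ContainerID):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;
import java.util.UUID;

class Node2ContainerMapSketch {
  // datanode UUID -> container IDs it currently reports.
  private final Map<UUID, Set<Long>> map = new HashMap<>();

  void insertNewDatanode(UUID dn, Set<Long> containers) {
    if (map.putIfAbsent(dn, new HashSet<>(containers)) != null) {
      throw new IllegalStateException("datanode already present: " + dn);
    }
  }

  // The suggested behavior: updating a node that was never inserted is a
  // bug in the caller, so fail loudly instead of silently creating it.
  void updateDatanodeMap(UUID dn, Set<Long> containers) {
    if (!map.containsKey(dn)) {
      throw new NoSuchElementException("unknown datanode: " + dn);
    }
    map.put(dn, new HashSet<>(containers));
  }

  Set<Long> getContainers(UUID dn) {
    return map.getOrDefault(dn, Collections.emptySet());
  }
}
```

Failing fast on an unknown node keeps insert and update as distinct operations, which makes bookkeeping bugs in the SCM visible immediately.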

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch
>
>
> Current SCM state map is maintained in nodeStateManager. This is the first 
> of several refactorings to split it into small, independent classes.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Patch Available  (was: Open)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463994#comment-16463994
 ] 

Lokesh Jain commented on HDDS-18:
-

HDDS-18.002.patch fixes the findbugs issues.

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: (was: HDFS-13245.008.patch)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-18:

Attachment: HDDS-18.002.patch

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: HDFS-13245.008.patch

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Open  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Commented] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463928#comment-16463928
 ] 

genericqa commented on HDDS-19:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 46m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 46m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Imag

[jira] [Updated] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-04 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-23:

Status: Patch Available  (was: Open)

> Remove SCMNodeAddressList from SCMRegisterRequestProto
> --
>
> Key: HDDS-23
> URL: https://issues.apache.org/jira/browse/HDDS-23
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-23.000.patch
>
>
> {{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by the SCM, 
> so it is not necessary to send it in the datanode's register call. 
> {{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Updated] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-04 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-23:

Attachment: HDDS-23.000.patch

> Remove SCMNodeAddressList from SCMRegisterRequestProto
> --
>
> Key: HDDS-23
> URL: https://issues.apache.org/jira/browse/HDDS-23
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-23.000.patch
>
>
> {{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by the SCM, 
> so it is not necessary to send it in the datanode's register call. 
> {{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Created] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-04 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-23:
---

 Summary: Remove SCMNodeAddressList from SCMRegisterRequestProto
 Key: HDDS-23
 URL: https://issues.apache.org/jira/browse/HDDS-23
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode, SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


{{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by the SCM, 
so it is not necessary to send it in the datanode's register call. 
{{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Work started] (HDDS-22) Restructure SCM - Datanode protocol

2018-05-04 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-22 started by Nanda kumar.
---
> Restructure SCM - Datanode protocol
> ---
>
> Key: HDDS-22
> URL: https://issues.apache.org/jira/browse/HDDS-22
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> This jira aims at properly defining the SCM - Datanode protocol.
> *EBNF of Heartbeat*
> {noformat}
> Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
> DeltaContainerReports | PipelineReports
>   DatanodeDetails ::= UUID | IpAddress | Hostname | Port
> Port ::= Type | Value
>   NodeReport ::= NodeIOStats | StorageReports
> NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
>   ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
>   KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
>   ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
> StorageReports ::= zero or more StorageReport 
>   StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
> Health ::= Status | ErrorCode | Message
> VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount 
> | ReadTime | WriteTime
>   ContainerReports ::= zero or more ContainerReport
> ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
> Name |  LifeCycleState | ContainerIOStats 
>   ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
>   DeltaContainerReports ::= ContainerID | Used
>   PipelineReport ::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
> EpochID | LogStats | LogFailed
> RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
> EpochChanged
> {noformat}
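The quoted grammar above can be mirrored, very roughly, in plain Java value classes. The sketch below is illustrative only: the class and field names follow the EBNF, not the actual Ozone protobuf messages, and most branches of the grammar are omitted.

```java
// Illustrative sketch only: it maps part of the heartbeat EBNF above onto
// plain Java value classes. Names mirror the grammar, not the real Ozone
// protobuf messages.
import java.util.List;

public class HeartbeatSketch {

    // ContainerOps ::= CreateCount | DeleteCount | GetInfoCount
    public record ContainerOps(long createCount, long deleteCount, long getInfoCount) {}

    // StorageReport ::= StorageID | Health | Used | Available (VolumeIOStats omitted)
    public record StorageReport(String storageId, String healthStatus, long used, long available) {}

    // NodeReport ::= NodeIOStats | StorageReports (only ContainerOps sketched for NodeIOStats)
    public record NodeReport(ContainerOps containerOps, List<StorageReport> storageReports) {}

    // Heartbeat ::= DatanodeDetails | NodeReport | ... (only two parts sketched here)
    public record Heartbeat(String datanodeUuid, NodeReport nodeReport) {}

    // StorageReports ::= zero or more StorageReport, so aggregate across the list.
    public static long totalUsed(Heartbeat hb) {
        return hb.nodeReport().storageReports().stream()
                .mapToLong(StorageReport::used).sum();
    }
}
```

In a shape like this, "zero or more StorageReport" becomes a `List`, and SCM-side consumers would aggregate over it as `totalUsed` does.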






[jira] [Updated] (HDDS-22) Restructure SCM - Datanode protocol

2018-05-04 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-22:

Description: 
This jira aims at properly defining the SCM - Datanode protocol.

*EBNF of Heartbeat*
{noformat}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports
  DatanodeDetails ::= UUID | IpAddress | Hostname | Port
Port ::= Type | Value
  NodeReport ::= NodeIOStats | StorageReports
NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
StorageReports ::= zero or more StorageReport   
  StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
Health ::= Status | ErrorCode | Message
VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount | 
ReadTime | WriteTime
  ContainerReports ::= zero or more ContainerReport
ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
Name |  LifeCycleState | ContainerIOStats   
  ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
  DeltaContainerReports ::= ContainerID | Used
  PipelineReport ::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
EpochID | LogStats | LogFailed
RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
EpochChanged
{noformat}

  was:
This jira aims at properly defining the SCM - Datanode protocol.

*EBNF of Heartbeat*
{noformat}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports
  DatanodeDetails ::= UUID | IpAddress | Hostname | Port
Port ::= Type | Value
  NodeReport ::= NodeIOStats | StorageReports
NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
StorageReports ::= zero or more StorageReport   
  StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
Health ::= Status | ErrorCode | Message
VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount | 
ReadTime | WriteTime
  ContainerReports ::= zero or more ContainerReport
ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
Name |  LifeCycleState | ContainerIOStats   
  ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
  DeltaContainerReports ::= ContainerID | Used
  PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
EpochID | LogStats | LogFailed
RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
EpochChanged
{noformat}


> Restructure SCM - Datanode protocol
> ---
>
> Key: HDDS-22
> URL: https://issues.apache.org/jira/browse/HDDS-22
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> This jira aims at properly defining the SCM - Datanode protocol.
> *EBNF of Heartbeat*
> {noformat}
> Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
> DeltaContainerReports | PipelineReports
>   DatanodeDetails ::= UUID | IpAddress | Hostname | Port
> Port ::= Type | Value
>   NodeReport ::= NodeIOStats | StorageReports
> NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
>   ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
>   KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
>   ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
> StorageReports ::= zero or more StorageReport 
>   StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
> Health ::= Status | ErrorCode | Message
> VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount 
> | ReadTime | WriteTime
>   ContainerReports ::= zero or more ContainerReport
> ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
> Name |  LifeCycleState | ContainerIOStats 
>   ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
>   DeltaContainerReports ::= ContainerID | Used
>   PipelineReport ::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
> EpochID | LogStats | LogFailed
> RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
> EpochChanged
> {noformat}




[jira] [Updated] (HDDS-22) Restructure SCM - Datanode protocol

2018-05-04 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-22:

Description: 
This jira aims at properly defining the SCM - Datanode protocol.

*EBNF of Heartbeat*
{noformat}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports
  DatanodeDetails ::= UUID | IpAddress | Hostname | Port
Port ::= Type | Value
  NodeReport ::= NodeIOStats | StorageReports
NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
StorageReports ::= zero or more StorageReport   
  StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
Health ::= Status | ErrorCode | Message
VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount | 
ReadTime | WriteTime
  ContainerReports ::= zero or more ContainerReport
ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
Name |  LifeCycleState | ContainerIOStats   
  ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
  DeltaContainerReports ::= ContainerID | Used
  PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
EpochID | LogStats | LogFailed
RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
EpochChanged
{noformat}

  was:
This jira aims at properly defining the SCM - Datanode protocol.
{noformat}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports
  DatanodeDetails ::= UUID | IpAddress | Hostname | Port
Port ::= Type | Value
  NodeReport ::= NodeIOStats | StorageReports
NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
StorageReports ::= zero or more StorageReport   
  StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
Health ::= Status | ErrorCode | Message
VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount | 
ReadTime | WriteTime
  ContainerReports ::= zero or more ContainerReport
ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
Name |  LifeCycleState | ContainerIOStats   
  ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
  DeltaContainerReports ::= ContainerID | Used
  PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
EpochID | LogStats | LogFailed
RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
EpochChanged
{noformat}


> Restructure SCM - Datanode protocol
> ---
>
> Key: HDDS-22
> URL: https://issues.apache.org/jira/browse/HDDS-22
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> This jira aims at properly defining the SCM - Datanode protocol.
> *EBNF of Heartbeat*
> {noformat}
> Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
> DeltaContainerReports | PipelineReports
>   DatanodeDetails ::= UUID | IpAddress | Hostname | Port
> Port ::= Type | Value
>   NodeReport ::= NodeIOStats | StorageReports
> NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
>   ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
>   KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
>   ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
> StorageReports ::= zero or more StorageReport 
>   StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
> Health ::= Status | ErrorCode | Message
> VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount 
> | ReadTime | WriteTime
>   ContainerReports ::= zero or more ContainerReport
> ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
> Name |  LifeCycleState | ContainerIOStats 
>   ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
>   DeltaContainerReports ::= ContainerID | Used
>   PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
> EpochID | LogStats | LogFailed
> RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
> EpochChanged
> {noformat}




[jira] [Updated] (HDDS-22) Restructure SCM - Datanode protocol

2018-05-04 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-22:

Description: 
This jira aims at properly defining the SCM - Datanode protocol.
{noformat}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports
  DatanodeDetails ::= UUID | IpAddress | Hostname | Port
Port ::= Type | Value
  NodeReport ::= NodeIOStats | StorageReports
NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
StorageReports ::= zero or more StorageReport   
  StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
Health ::= Status | ErrorCode | Message
VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount | 
ReadTime | WriteTime
  ContainerReports ::= zero or more ContainerReport
ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
Name |  LifeCycleState | ContainerIOStats   
  ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
  DeltaContainerReports ::= ContainerID | Used
  PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
EpochID | LogStats | LogFailed
RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
EpochChanged
{noformat}

  was:
This jira aims at properly defining the SCM - Datanode protocol.
{code}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports

  DatanodeDetails ::= UUID | IpAddress | Hostname | Port

Port ::= Type | Value

  NodeReport ::= NodeIOStats | StorageReports

NodeIOStats ::= ContainerOps | KeyOps | ChunkOps

  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount

  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount

  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount

StorageReports ::= zero or more StorageReport

StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats

StorageID ::= UUID

Health ::= Status | ErrorCode | Message

VolumeIOStats ::= ReadBytes | ReadOpCount | 
WriteBytes | WriteOpCount | ReadTime | WriteTime

ContainerReports ::= zero or more ContainerReport

ContainerReport ::= ContainerID | finalHash | size | used | 
keyCount |  Name |  LifeCycleState | ContainerIOStats

ContainerIOStats ::= readCount| writeCount| readBytes| 
writeBytes

DeltaContainerReports ::= ContainerID | Used

PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp 
| EpochID | LogStats | LogFailed

RatisChange ::= NodeAdded | NodeRemoved | DeadNode | 
NewLeaderElected | EpochChanged

{code}


> Restructure SCM - Datanode protocol
> ---
>
> Key: HDDS-22
> URL: https://issues.apache.org/jira/browse/HDDS-22
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> This jira aims at properly defining the SCM - Datanode protocol.
> {noformat}
> Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
> DeltaContainerReports | PipelineReports
>   DatanodeDetails ::= UUID | IpAddress | Hostname | Port
> Port ::= Type | Value
>   NodeReport ::= NodeIOStats | StorageReports
> NodeIOStats ::= ContainerOps | KeyOps | ChunkOps
>   ContainerOps ::= CreateCount | DeleteCount| GetInfoCount
>   KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount
>   ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount
> StorageReports ::= zero or more StorageReport 
>   StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats
> Health ::= Status | ErrorCode | Message
> VolumeIOStats ::= ReadBytes | ReadOpCount | WriteBytes | WriteOpCount 
> | ReadTime | WriteTime
>   ContainerReports ::= zero or more ContainerReport
> ContainerReport ::= ContainerID | finalHash | size | used | keyCount |  
> Name |  LifeCycleState | ContainerIOStats 
>   ContainerIOStats ::= readCount| writeCount| readBytes| writeBytes
>   DeltaContainerReports ::= ContainerID | Used
>   PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp | 
> EpochID | LogStats | LogFailed
> RatisChange ::= NodeAdded | NodeRemoved | DeadNode | NewLeaderElected | 
> EpochChanged
> {noformat}




[jira] [Created] (HDDS-22) Restructure SCM - Datanode protocol

2018-05-04 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-22:
---

 Summary: Restructure SCM - Datanode protocol
 Key: HDDS-22
 URL: https://issues.apache.org/jira/browse/HDDS-22
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode, SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


This jira aims at properly defining the SCM - Datanode protocol.
{code}
Heartbeat ::= DatanodeDetails | NodeReport | ContainerReports | 
DeltaContainerReports | PipelineReports

  DatanodeDetails ::= UUID | IpAddress | Hostname | Port

Port ::= Type | Value

  NodeReport ::= NodeIOStats | StorageReports

NodeIOStats ::= ContainerOps | KeyOps | ChunkOps

  ContainerOps ::= CreateCount | DeleteCount| GetInfoCount

  KeyOps ::= putKeyCount | getKeyCount | DeleteKeyCount | ListKeyCount

  ChunkOps ::= WriteChunkCount | ReadChunkCount | DeleteChunkCount

StorageReports ::= zero or more StorageReport

StorageReport ::= StorageID | Health | Used | Available | VolumeIOStats

StorageID ::= UUID

Health ::= Status | ErrorCode | Message

VolumeIOStats ::= ReadBytes | ReadOpCount | 
WriteBytes | WriteOpCount | ReadTime | WriteTime

ContainerReports ::= zero or more ContainerReport

ContainerReport ::= ContainerID | finalHash | size | used | 
keyCount |  Name |  LifeCycleState | ContainerIOStats

ContainerIOStats ::= readCount| writeCount| readBytes| 
writeBytes

DeltaContainerReports ::= ContainerID | Used

PipelineReport::= PipelineID | Members | RatisChange | ChangeTimeStamp 
| EpochID | LogStats | LogFailed

RatisChange ::= NodeAdded | NodeRemoved | DeadNode | 
NewLeaderElected | EpochChanged

{code}






[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463897#comment-16463897
 ] 

genericqa commented on HDDS-18:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-ozone/ozone-manager generated 19 new + 1 
unchanged - 0 fixed = 20 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
24s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:gr

[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463853#comment-16463853
 ] 

genericqa commented on HDDS-18:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} integration-test in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {co

[jira] [Commented] (HDFS-13528) If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-04 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463842#comment-16463842
 ] 

Dibyendu Karmakar commented on HDFS-13528:
--

We can pass needQuotaVerify as false when calling getLocationsForPath from 
getQuotaRemoteLocations:
{code:java}
private List<RemoteLocation> getQuotaRemoteLocations(String path)
    throws IOException {
  ...
  ...
  locations.addAll(rpcServer.getLocationsForPath(childPath, true, false));
  ...
{code}
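More generally, the fix amounts to refreshing each mount entry independently, so a QuotaExceededException on one entry cannot abort the whole pass. A minimal sketch of that pattern (class and method names here are illustrative, not the actual Router code):

```java
// Illustrative sketch only (not the actual Router code): refresh quota usage
// per mount entry independently, so one entry that already exceeds its quota
// cannot abort the refresh for the remaining entries.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class QuotaRefreshSketch {

    // Stand-in for the exception thrown when a mount entry is over quota.
    public static class QuotaExceededException extends RuntimeException {}

    // Stand-in for the per-entry RPC that computes current usage.
    public interface UsageFetcher {
        long fetchUsage(String mountPath);
    }

    public static Map<String, Long> refreshAll(List<String> mounts, UsageFetcher fetcher) {
        Map<String, Long> usage = new LinkedHashMap<>();
        for (String mount : mounts) {
            try {
                usage.put(mount, fetcher.fetchUsage(mount));
            } catch (QuotaExceededException e) {
                // Record a sentinel for the failing entry and keep going,
                // instead of letting the exception stop the whole pass.
                usage.put(mount, -1L);
            }
        }
        return usage;
    }
}
```

The key point is the per-entry try/catch: the failing entry is skipped (or marked) while every other mount table entry still gets its usage updated.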

> If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> -
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets a 
> QuotaExceededException and does not update the quota usage for the rest of the 
> mount table entries.






[jira] [Commented] (HDDS-17) Add node to container map class to simplify state in SCM

2018-05-04 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463837#comment-16463837
 ] 

Elek, Marton commented on HDDS-17:
--

Thanks @Anu, the patch itself looks good to me (except the minor whitespace 
issue).

((This is just a simple data class to store the information; the integration 
point (how it will be used) is not visible yet. For example, currently there is 
just one ReportResult with all the expected state, but they could also live in 
different event classes. It depends on the usage. Still, it's good and safe to 
commit it now, and we will see later whether the usage requires changes. I 
guess you have a clearer vision about it...)) 

> Add node to container map class to simplify state in SCM
> 
>
> Key: HDDS-17
> URL: https://issues.apache.org/jira/browse/HDDS-17
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-17.001.patch
>
>
> Current SCM state map is maintained in nodeStateManager. This first of 
> several refactoring to make it independent and small classes.






[jira] [Created] (HDFS-13528) If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-04 Thread Dibyendu Karmakar (JIRA)
Dibyendu Karmakar created HDFS-13528:


 Summary: If a directory exceeds quota limit then quota usage is 
not refreshed for other mount entries 
 Key: HDFS-13528
 URL: https://issues.apache.org/jira/browse/HDFS-13528
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Dibyendu Karmakar


If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets a 
QuotaExceededException and does not update the quota usage for the rest of the 
mount table entries.







[jira] [Assigned] (HDFS-13528) If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-04 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar reassigned HDFS-13528:


Assignee: Dibyendu Karmakar

> If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> -
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.






[jira] [Moved] (HDDS-21) Ozone: Add support for rename key within a bucket for rest client

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain moved HDFS-13229 to HDDS-21:


Component/s: (was: ozone)
   Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
Key: HDDS-21  (was: HDFS-13229)
Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDDS-21
> URL: https://issues.apache.org/jira/browse/HDDS-21
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for rename key within a bucket for rest client.






[jira] [Commented] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463745#comment-16463745
 ] 

genericqa commented on HDDS-20:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-20 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-20 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913064/HDFS-13228-HDFS-7240.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/29/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDDS-20
> URL: https://issues.apache.org/jira/browse/HDDS-20
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13228-HDFS-7240.001.patch
>
>
> This jira aims to implement the rename operation on a key within a bucket for 
> the rpc client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, as a rename would 
> become just a db update in KSM.
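A toy illustration of why the server-side rename helps: instead of reading the value and rewriting it under a new key (copy then delete), the rename becomes a single metadata re-keying, which is what the KSM db update amounts to. The class and method names below are assumptions for illustration, not the Ozone client API:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of a bucket's key space, showing rename as a metadata-only
 * re-keying. Illustrative only, not the HDDS-20 implementation.
 */
public class BucketRenameSketch {
    private final Map<String, byte[]> keys = new HashMap<>();

    public void put(String key, byte[] data) { keys.put(key, data); }

    public byte[] get(String key) { return keys.get(key); }

    /** Re-keys the entry in place; no data is copied or rewritten. */
    public boolean renameKey(String from, String to) {
        if (!keys.containsKey(from) || keys.containsKey(to)) {
            return false; // source missing or destination already taken
        }
        keys.put(to, keys.remove(from));
        return true;
    }
}
```

The cost of renameKey here is independent of the value size, whereas the rewrite-based approach scales with the amount of data stored under the key.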






[jira] [Updated] (HDFS-13229) Ozone: Add support for rename key within a bucket for rest client

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13229:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-7240)

> Ozone: Add support for rename key within a bucket for rest client
> -
>
> Key: HDFS-13229
> URL: https://issues.apache.org/jira/browse/HDFS-13229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ozone
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13229-HDFS-7240.001.patch
>
>
> This jira aims to add support for rename key within a bucket for rest client.






[jira] [Moved] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain moved HDFS-13228 to HDDS-20:


Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Key: HDDS-20  (was: HDFS-13228)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDDS-20
> URL: https://issues.apache.org/jira/browse/HDDS-20
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13228-HDFS-7240.001.patch
>
>
> This jira aims to implement the rename operation on a key within a bucket for 
> the rpc client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, as a rename would 
> become just a db update in KSM.






[jira] [Updated] (HDFS-13228) Ozone: Add support for rename key within a bucket for rpc client

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13228:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-13074)

> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDFS-13228
> URL: https://issues.apache.org/jira/browse/HDFS-13228
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13228-HDFS-7240.001.patch
>
>
> This jira aims to implement the rename operation on a key within a bucket for 
> the rpc client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, as a rename would 
> become just a db update in KSM.






[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-19:

Attachment: HDDS-19.001.patch

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Moved] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain moved HDFS-13456 to HDDS-19:


Component/s: (was: ozone)
   Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
Key: HDDS-19  (was: HDFS-13456)
Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDFS-13456) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13456:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-7240)

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDFS-13456
> URL: https://issues.apache.org/jira/browse/HDFS-13456
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-18:

Attachment: HDDS-18.001.patch

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Comment Edited] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463714#comment-16463714
 ] 

Lokesh Jain edited comment on HDDS-18 at 5/4/18 11:01 AM:
--

[~nandakumar131] Thanks for reviewing the patch! HDDS-18.001.patch addresses 
your comments. 
In OzoneClientFactory, I now just log the error message. I have also renamed 
OzoneRestClientException to OzoneClientException.


was (Author: ljain):
[~nandakumar131] Thanks for reviewing the patch! HDFS-13431.003.patch addresses 
your comments. 
In OzoneClientFactory now I am just logging the error message. Also I have 
refactored OzoneRestClientException to OzoneClientException.

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Moved] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain moved HDFS-13431 to HDDS-18:


Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Key: HDDS-18  (was: HDFS-13431)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-18:

Attachment: (was: HDFS-13431.003.patch)

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13431:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-7240)

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDFS-13431
> URL: https://issues.apache.org/jira/browse/HDFS-13431
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Commented] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463714#comment-16463714
 ] 

Lokesh Jain commented on HDFS-13431:


[~nandakumar131] Thanks for reviewing the patch! HDFS-13431.003.patch addresses 
your comments. 
In OzoneClientFactory now I am just logging the error message. Also I have 
refactored OzoneRestClientException to OzoneClientException.

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDFS-13431
> URL: https://issues.apache.org/jira/browse/HDFS-13431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch, HDFS-13431.003.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-04 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13431:
---
Attachment: HDFS-13431.003.patch

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDFS-13431
> URL: https://issues.apache.org/jira/browse/HDFS-13431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch, HDFS-13431.003.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463693#comment-16463693
 ] 

genericqa commented on HDFS-13245:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13245 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921919/HDFS-13245.008.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24135/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Open  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: HDFS-13245.008.patch

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: (was: HDFS-13245.008.patch)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Patch Available  (was: Open)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Attachment: HDFS-13245.008.patch

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Patch Available  (was: Open)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-04 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Open  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch
>
>
> Add a DBMS implementation for the State Store.






[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463660#comment-16463660
 ] 

Yiqun Lin edited comment on HDFS-13443 at 5/4/18 10:17 AM:
---

Some review comments from me (reviewed based on the v007 patch):

Majors:
 * The refresh API won't make sense for every State Store implementation, 
right? For example, if we use a local file as the State Store, the refresh 
operation should only be done in the local Router, and there is no need to 
invoke remote Routers.
 * We need to cover more test cases:

 # A test case for the behavior of updating the cache timeout.
 # A test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} could be extended further, 
as {{TestRouterMountTableCacheRefresh}} does.

Minors:
 * For readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}} and {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}?

*MountTableRefreshService.java*
 * Line62: {{local}} update to {{Local}}.
 * Line71: Can we add a comment for the mapping entry?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting the 
RouterState records.
 * Line221: This error msg seems too simple.
 * Line247: {{succesCount={},failureCount=}} update to 
'succesCount={},failureCount={}'. One more suggestion: why not add metrics 
instead of a counter in this class?

*MountTableRefreshThread.java*
 * Line51: It would be better to print the exception thrown here.


was (Author: linyiqun):
Some review comments from me (reviewed based on v007 patch):

Majors:
 * Refresh API won't not make sense for all the State Store, right? For 
example, if we use local file as the State Store, Refresh operation should only 
do in local Router and no need to invoke remote Routers.
 * We need to cover more test cases :

 # The test case for the behavior of updating cache timeout.
 # The test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} can be completed more, 
like {{TestRouterMountTableCacheRefresh}} did.

Minors:
 * For more readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}}, {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}.

*MountTableRefreshService.java*
 * Line62: {{local}} update to {{Local}}.
 * Line71: Can we add comment the mapping entry ?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting RouterState 
records.
 * Line221: This error msg seems too simple.
 * Line247: {{succesCount={},failureCount=}} update to 
{{succesCount={},failureCount=}}. One more suggestion: why not add metrics 
instead of counter in this class?

*MountTableRefreshThread.java*
 * Line51: It's will be better to print the exception thrown here.

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch
>
>
> Currently the mount table cache is updated periodically; by default the 
> cache is updated every minute. After a change in the mount table, user 
> operations may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin 
> server can update its cache and ask the other routers to update their 
> caches*. For example, if there are three routers R1, R2, R3 in a cluster, 
> then the add mount table entry API, on the admin server side, will perform 
> the following sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user
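The proposed refresh sequence above can be sketched as a tiny simulation. All class and method names here are hypothetical illustrations, not the actual Hadoop RBF API:

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the proposed refresh fan-out (hypothetical names).
public class MountTableRefreshSketch {

    static class Router {
        final String name;
        boolean cacheFresh = false;

        Router(String name) {
            this.name = name;
        }

        // Stand-in for the proposed refresh RPC: reload the mount table cache.
        void refreshMountTableCache() {
            cacheFresh = true;
        }
    }

    // Sequence performed by the admin-server router (R1) on an add request.
    static void addMountTableEntry(Router local, List<Router> remotes) {
        // 1. R1 adds the mount table entry to the state store (elided here).
        // 2. R1 calls the refresh API on each remote router (R2, R3, ...).
        for (Router r : remotes) {
            r.refreshMountTableCache();
        }
        // 3. R1 directly refreshes its own cache.
        local.refreshMountTableCache();
        // 4. Only now is the response sent back to the user (elided here).
    }

    public static void main(String[] args) {
        Router r1 = new Router("R1");
        Router r2 = new Router("R2");
        Router r3 = new Router("R3");
        addMountTableEntry(r1, Arrays.asList(r2, r3));
        // After the call, every router's cache is fresh.
        System.out.println(r1.cacheFresh && r2.cacheFresh && r3.cacheFresh);
    }
}
```

The point of the ordering is that the user's response is delayed until every router, remote and local, has reloaded its cache.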



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463660#comment-16463660
 ] 

Yiqun Lin edited comment on HDFS-13443 at 5/4/18 10:16 AM:
---

Some review comments from me (reviewed based on the v007 patch):

Majors:
 * The refresh API won't make sense for every State Store implementation, 
right? For example, if we use a local file as the State Store, the refresh 
operation should only be done in the local Router; there is no need to invoke 
remote Routers.
 * We need to cover more test cases:

 # A test case for the behavior of the cache update timeout.
 # A test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} could be made more 
complete, as {{TestRouterMountTableCacheRefresh}} does.

Minors:
 * For better readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}} and {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}?

*MountTableRefreshService.java*
 * Line62: update {{local}} to {{Local}}.
 * Line71: Can we add a comment for the mapping entry?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting the 
RouterState records.
 * Line221: This error message seems too simple.
 * Line247: update {{succesCount={},failureCount=}} to 
{{succesCount={},failureCount={}}}. One more suggestion: why not add metrics 
instead of a counter in this class?

*MountTableRefreshThread.java*
 * Line51: It would be better to print the exception thrown here.


was (Author: linyiqun):
Some review comments from me (reviewed based on the v007 patch):

Majors:
 * The refresh API won't make sense for every State Store implementation, 
right? For example, if we use a local file as the State Store, the refresh 
operation should only be done in the local Router; there is no need to invoke 
remote Routers.
 * We need to cover more test cases:

 # A test case for the behavior of the cache update timeout.
 # A test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} could be made more 
complete, as {{TestRouterMountTableCacheRefresh}} does.

Minors:
 * For better readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}} and {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}?

*MountTableRefreshService.java*
 * Line62: update {{local}} to {{Local}}.
 * Line71: Can we add a comment for the mapping entry?
 * Line91: Can we use {{NetUtils.getHostPortString}} to replace 
{{StateStoreUtils.getHostPortString}}?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting the 
RouterState records.
 * Line221: This error message seems too simple.
 * Line247: update {{succesCount={},failureCount=}} to 
{{succesCount={},failureCount={}}}. One more suggestion: why not add metrics 
instead of a counter in this class?

*MountTableRefreshThread.java*
 * Line51: It would be better to print the exception thrown here.

*RouterHeartbeatService.java*
Line98: Can we use {{NetUtils.getHostPortString}} here?


> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch
>
>
> Currently the mount table cache is updated periodically; by default the 
> cache is updated every minute. After a change in the mount table, user 
> operations may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin 
> server can update its cache and ask the other routers to update their 
> caches*. For example, if there are three routers R1, R2, R3 in a cluster, 
> then the add mount table entry API, on the admin server side, will perform 
> the following sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user




[jira] [Commented] (HDFS-12981) HDFS renameSnapshot to Itself for Non Existent snapshot should throw error

2018-05-04 Thread Kitti Nanasi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463662#comment-16463662
 ] 

Kitti Nanasi commented on HDFS-12981:
-

The unit tests didn't fail because of this patch; they executed successfully 
in my local environment.

> HDFS  renameSnapshot to Itself for Non Existent snapshot should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch
>
>
> When trying to rename a non-existent HDFS snapshot to itself, no error is 
> reported and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs  -createSnapshot /tmp/dir1  snap1_dir
> Renaming from a non-existent snapshot to another non-existent name gives an 
> error and return code 1. This is correct.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist  ; 
>   echo $?
>
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Renaming from a non-existent snapshot to the same non-existent name gives no 
> error and return code 0, instead of an error and return code 1.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist  ;  echo $?
> Current behavior: no error and return code 0.
> Expected behavior: an error is returned, with return code 1.
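The expected check can be sketched as follows. The structure and names below are hypothetical illustrations; the real check lives in the namenode's snapshot-rename path:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the expected validation: renaming a snapshot must fail
// when the source snapshot does not exist, even if oldName equals newName.
public class RenameSnapshotSketch {

    static void renameSnapshot(Set<String> snapshots, String dir,
                               String oldName, String newName) {
        // Check existence BEFORE any same-name short-circuit, so that
        // "rename nonexist -> nonexist" still reports an error.
        if (!snapshots.contains(oldName)) {
            throw new IllegalArgumentException("The snapshot " + oldName
                + " does not exist for directory " + dir);
        }
        if (oldName.equals(newName)) {
            return; // renaming an existing snapshot to itself is a no-op
        }
        snapshots.remove(oldName);
        snapshots.add(newName);
    }

    public static void main(String[] args) {
        Set<String> snapshots = new HashSet<>();
        snapshots.add("snap1_dir");
        try {
            // Reproduces the reported case: source snapshot does not exist.
            renameSnapshot(snapshots, "/tmp/dir1", "nonexist", "nonexist");
            System.out.println("no error");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```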






[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-05-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463660#comment-16463660
 ] 

Yiqun Lin commented on HDFS-13443:
--

Some review comments from me (reviewed based on the v007 patch):

Majors:
 * The refresh API won't make sense for every State Store implementation, 
right? For example, if we use a local file as the State Store, the refresh 
operation should only be done in the local Router; there is no need to invoke 
remote Routers.
 * We need to cover more test cases:

 # A test case for the behavior of the cache update timeout.
 # A test case for the behavior of RouterRpcClient connection expiration.
 # {{TestRouterAdminCLI#testRefreshMountTableCache}} could be made more 
complete, as {{TestRouterMountTableCacheRefresh}} does.

Minors:
 * For better readability, can we rename {{LocalRouterMountTableRefresh}} to 
{{LocalRouterMountTableRefresher}} and {{RemoteRouterMountTableRefresh}} to 
{{RemoteRouterMountTableRefresher}}?

*MountTableRefreshService.java*
 * Line62: update {{local}} to {{Local}}.
 * Line71: Can we add a comment for the mapping entry?
 * Line91: Can we use {{NetUtils.getHostPortString}} to replace 
{{StateStoreUtils.getHostPortString}}?
 * Line162: We may need to trigger 
{{router.getRouterStateManager().loadCache(true)}} before getting the 
RouterState records.
 * Line221: This error message seems too simple.
 * Line247: update {{succesCount={},failureCount=}} to 
{{succesCount={},failureCount={}}}. One more suggestion: why not add metrics 
instead of a counter in this class?

*MountTableRefreshThread.java*
 * Line51: It would be better to print the exception thrown here.

*RouterHeartbeatService.java*
Line98: Can we use {{NetUtils.getHostPortString}} here?


> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch
>
>
> Currently the mount table cache is updated periodically; by default the 
> cache is updated every minute. After a change in the mount table, user 
> operations may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin 
> server can update its cache and ask the other routers to update their 
> caches*. For example, if there are three routers R1, R2, R3 in a cluster, 
> then the add mount table entry API, on the admin server side, will perform 
> the following sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user






[jira] [Comment Edited] (HDFS-11351) HDFS throws "java.lang.IllegalStateException"

2018-05-04 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463488#comment-16463488
 ] 

maobaolong edited comment on HDFS-11351 at 5/4/18 7:27 AM:
---

I met this exception too; the affected version is 2.7.1.


{code:java}
java.lang.IllegalStateException
at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:118)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
at com.sun.proxy.$Proxy18.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2081)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
{code}



was (Author: maobaolong):
I met this exception too.


{code:java}
java.lang.IllegalStateException
at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:118)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
at com.sun.proxy.$Proxy18.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2081)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
{code}


> HDFS  throws "java.lang.IllegalStateException"
> --
>
> Key: HDFS-11351
> URL: https://issues.apache.org/jira/browse/HDFS-11351
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhaofei Meng
>Priority: Major
>
> Edited the description to make it more readable.
> {noformat}
> 2017-01-16 06:00:50,236 ERROR metastore.RetryingHMSHandler 
> (RetryingHMSHandler.java:invoke(155)) - 
> MetaException(message:java.lang.IllegalStateException
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:5568
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503
>  at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43
>  at java.lang.reflect.Method.invoke(Method.java:606
>  at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107
>  at com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown 
> Source
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9817
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9801
>  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39
>  at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110
>  at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106
>  at java.security.AccessController.doPrivileged(Native Method
>  at javax.security.auth.Subject.doAs(Subject.java:415
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491
>  at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118
>  at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615
>  at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.lang.IllegalStateException at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129
>  at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99
>  at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source
>  at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1701
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120
>  at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:475
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1430
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1489)
>  ... 18 more
> {noformat}

[jira] [Commented] (HDFS-11351) HDFS throws "java.lang.IllegalStateException"

2018-05-04 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463488#comment-16463488
 ] 

maobaolong commented on HDFS-11351:
---

I met this exception too.


{code:java}
java.lang.IllegalStateException
at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:118)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
at com.sun.proxy.$Proxy18.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2081)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
{code}


> HDFS  throws "java.lang.IllegalStateException"
> --
>
> Key: HDFS-11351
> URL: https://issues.apache.org/jira/browse/HDFS-11351
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhaofei Meng
>Priority: Major
>
> Edited the description to make it more readable.
> {noformat}
> 2017-01-16 06:00:50,236 ERROR metastore.RetryingHMSHandler 
> (RetryingHMSHandler.java:invoke(155)) - 
> MetaException(message:java.lang.IllegalStateException
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:5568
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503
>  at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43
>  at java.lang.reflect.Method.invoke(Method.java:606
>  at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107
>  at com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown 
> Source
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9817
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9801
>  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39
>  at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110
>  at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106
>  at java.security.AccessController.doPrivileged(Native Method
>  at javax.security.auth.Subject.doAs(Subject.java:415
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491
>  at 
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118
>  at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615
>  at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.lang.IllegalStateException at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129
>  at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99
>  at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source
>  at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1701
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120
>  at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:475
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1430
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1489)
>  ... 18 more
> {noformat}






[jira] [Created] (HDFS-13527) createLocatedBlock isCorrupt logic is faulty when all blocks are corrupt.

2018-05-04 Thread maobaolong (JIRA)
maobaolong created HDFS-13527:
-

 Summary: createLocatedBlock isCorrupt logic is faulty when all 
blocks are corrupt.
 Key: HDFS-13527
 URL: https://issues.apache.org/jira/browse/HDFS-13527
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Affects Versions: 3.2.0
Reporter: maobaolong


The steps are:

1. put a small file into HDFS at FILEPATH
2. remove the block's replicas from every datanode's block pool
3. restart the datanodes
4. restart the namenode (and leave safemode)
5. hdfs fsck FILEPATH -files -blocks -locations
6. the namenode thinks this block is not a corrupt block


The code logic is:
{code:java}
// get block locations
NumberReplicas numReplicas = countNodes(blk);
final int numCorruptNodes = numReplicas.corruptReplicas();
final int numCorruptReplicas = corruptReplicas.numCorruptReplicas(blk);
if (numCorruptNodes != numCorruptReplicas) {
  LOG.warn("Inconsistent number of corrupt replicas for {}"
  + " blockMap has {} but corrupt replicas map has {}",
  blk, numCorruptNodes, numCorruptReplicas);
}

final int numNodes = blocksMap.numNodes(blk);
final boolean isCorrupt;
if (blk.isStriped()) {
  BlockInfoStriped sblk = (BlockInfoStriped) blk;
  isCorrupt = numCorruptReplicas != 0 &&
  numReplicas.liveReplicas() < sblk.getRealDataBlockNum();
} else {
  isCorrupt = numCorruptReplicas != 0 && numCorruptReplicas == numNodes;
}
{code}
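A minimal illustration of the non-striped branch of the predicate above. The values are made up, and this is one plausible reading of the report, not a verified diagnosis: when every replica has been deleted on disk, nothing may be recorded in the corrupt-replicas map, so {{numCorruptReplicas}} is 0 and the predicate reports the block as not corrupt even though no live replica exists.

```java
// Sketch of the quoted isCorrupt predicate for contiguous blocks.
// Values are illustrative; this demonstrates the reported gap, not a fix.
public class IsCorruptSketch {

    // Same shape as the quoted BlockManager logic (non-striped branch).
    static boolean isCorrupt(int numCorruptReplicas, int numNodes) {
        return numCorruptReplicas != 0 && numCorruptReplicas == numNodes;
    }

    public static void main(String[] args) {
        // Normal all-corrupt case: 3 nodes, all 3 hold corrupt replicas.
        System.out.println(isCorrupt(3, 3));
        // Reported case: replicas removed on every datanode, so no replica
        // is recorded as corrupt and no node holds the block at all; the
        // predicate then reports "not corrupt".
        System.out.println(isCorrupt(0, 0));
    }
}
```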



