[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-30 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16806055#comment-16806055
 ] 

Anoop Sam John commented on HDFS-14355:
---

bq. “dfs.datanode.cache” is the prefix followed for all the cache-related 
configs, so we would like to follow the pattern
Fine. I was also not sure whether you were following some naming pattern, so I 
am fine with the given reasoning.
Thanks for addressing the comments. Looks good.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, 
> HDFS-14355.008.patch, HDFS-14355.009.patch
>
>
> This task is to implement caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.






[jira] [Commented] (HDFS-14378) Simplify the design of multiple NN and both logic of edit log roll and checkpoint

2019-03-30 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16806045#comment-16806045
 ] 

star commented on HDFS-14378:
-

First step: make the ANN roll its own edit log.

Last step: make the ANN download an fsimage from a randomly chosen SNN. This 
will be added later.

> Simplify the design of multiple NN and both logic of edit log roll and 
> checkpoint
> -
>
> Key: HDFS-14378
> URL: https://issues.apache.org/jira/browse/HDFS-14378
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: star
>Assignee: star
>Priority: Minor
> Attachments: HDFS-14378-trunk.001.patch
>
>
>       HDFS-6440 introduced a mechanism to support more than 2 NNs. It 
> implements a first-writer-wins policy to avoid duplicated fsimage 
> downloading. The variable 'isPrimaryCheckPointer' holds the first-writer 
> state, with which the SNN will provide the fsimage for the ANN next time. 
> Then we have three roles in the NN cluster: the ANN, one primary SNN, and 
> one or more normal SNNs.
>       Since HDFS-12248, there may be more than two primary SNNs shortly 
> after an exception occurs. That change handles a scenario in which the SNN 
> will not upload the fsimage on IOE and InterruptedException. Though it will 
> not cause any further functional issues, it is inconsistent.
>       Furthermore, the edit log may be rolled more frequently than necessary 
> with multiple standby NameNodes (HDFS-14349). (I'm not so sure about this; I 
> will verify it with unit tests, or anyone could point it out.)
>       Above all, I'm wondering if we could make it simple with the following 
> changes:
>  * There are only two roles: ANN and SNN.
>  * The ANN will roll its edit log every DFS_HA_LOGROLL_PERIOD_KEY period.
>  * The ANN will select an SNN from which to download the checkpoint.
> The SNN will just do log tailing and checkpointing, and then provide a 
> servlet for fsimage downloading as usual. The SNN will not try to roll the 
> edit log or send checkpoint requests to the ANN.
> In a word, the ANN will be more active. Suggestions are welcome.
>  
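A minimal sketch of the ANN-driven flow proposed above (the class and method 
names are hypothetical stand-ins, not the actual NameNode API):

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Random;

// Stand-ins for the real FSEditLog and standby transfer machinery; this is a
// sketch of the proposed design, not a patch.
interface EditLog { void rollEditLog() throws IOException; }
interface StandbyInfo { void downloadLatestFsImage() throws IOException; }

public class ActiveNameNodeDriver implements Runnable {
  private final long logRollPeriodMs; // from DFS_HA_LOGROLL_PERIOD_KEY
  private final EditLog editLog;
  private final List<StandbyInfo> standbys;
  private final Random random = new Random();

  public ActiveNameNodeDriver(long logRollPeriodMs, EditLog editLog,
      List<StandbyInfo> standbys) {
    this.logRollPeriodMs = logRollPeriodMs;
    this.editLog = editLog;
    this.standbys = standbys;
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Thread.sleep(logRollPeriodMs);
        // Step 1: the ANN rolls its own edit log on a fixed period.
        editLog.rollEditLog();
        // Step 2: the ANN pulls the latest fsimage from a randomly chosen SNN.
        StandbyInfo snn = standbys.get(random.nextInt(standbys.size()));
        snn.downloadLatestFsImage();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } catch (IOException e) {
        // A failed roll or download simply waits for the next cycle.
      }
    }
  }
}
{code}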






[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-03-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16806044#comment-16806044
 ] 

Hadoop QA commented on HDFS-13853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
13s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
25s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964338/HDFS-13853-HDFS-13891-03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 21b2d5c88cec 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-14378) Simplify the design of multiple NN and both logic of edit log roll and checkpoint

2019-03-30 Thread star (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

star updated HDFS-14378:

Attachment: HDFS-14378-trunk.001.patch

> Simplify the design of multiple NN and both logic of edit log roll and 
> checkpoint
> -
>
> Key: HDFS-14378
> URL: https://issues.apache.org/jira/browse/HDFS-14378
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: star
>Assignee: star
>Priority: Minor
> Attachments: HDFS-14378-trunk.001.patch
>
>
>       HDFS-6440 introduced a mechanism to support more than 2 NNs. It 
> implements a first-writer-wins policy to avoid duplicated fsimage 
> downloading. The variable 'isPrimaryCheckPointer' holds the first-writer 
> state, with which the SNN will provide the fsimage for the ANN next time. 
> Then we have three roles in the NN cluster: the ANN, one primary SNN, and 
> one or more normal SNNs.
>       Since HDFS-12248, there may be more than two primary SNNs shortly 
> after an exception occurs. That change handles a scenario in which the SNN 
> will not upload the fsimage on IOE and InterruptedException. Though it will 
> not cause any further functional issues, it is inconsistent.
>       Furthermore, the edit log may be rolled more frequently than necessary 
> with multiple standby NameNodes (HDFS-14349). (I'm not so sure about this; I 
> will verify it with unit tests, or anyone could point it out.)
>       Above all, I'm wondering if we could make it simple with the following 
> changes:
>  * There are only two roles: ANN and SNN.
>  * The ANN will roll its edit log every DFS_HA_LOGROLL_PERIOD_KEY period.
>  * The ANN will select an SNN from which to download the checkpoint.
> The SNN will just do log tailing and checkpointing, and then provide a 
> servlet for fsimage downloading as usual. The SNN will not try to roll the 
> edit log or send checkpoint requests to the ANN.
> In a word, the ANN will be more active. Suggestions are welcome.
>  






[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-03-30 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13853:

Attachment: HDFS-13853-HDFS-13891-03.patch

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch, HDFS-13853-HDFS-13891-03.patch
>
>
> {code:java}
> // Create a new entry
> Map destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}






[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-03-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805984#comment-16805984
 ] 

Íñigo Goiri commented on HDFS-13853:


Not sure about dest|destAdd|destremove. 
I think it is fine to pass all the new destinations and that's it. 

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch
>
>
> {code:java}
> // Create a new entry
> Map destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}






[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805975#comment-16805975
 ] 

Hadoop QA commented on HDFS-14355:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14355 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964324/HDFS-14355.009.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1d0e56eb3623 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf3b7fd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26552/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26552/testReport/ |
| Max. process+thread count | 5240 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-30 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805938#comment-16805938
 ] 

Feilong He commented on HDFS-14355:
---

Thanks [~anoop.hbase] for your valuable suggestions. We have uploaded a new 
patch HDFS-14355.009.patch with some updates based on your suggestions.
{quote}getBlockInputStreamWithCheckingPmemCache -> Can be private method
{quote}
Yes, this method can be private and it's better to make it private. We have 
fixed this issue in the new patch.
{quote}public PmemVolumeManager getPmemVolumeManager -> Why being exposed? For 
tests? If so can this be package private? And also mark it with 
@VisibleForTesting
{quote}
Indeed, there is no need to expose this method in the current impl. It should 
be package private with the @VisibleForTesting annotation. This issue has been 
fixed in the new patch.
{quote}I think the afterCache() thing is an unwanted indirection

Actually in PmemMappableBlockLoader#load, once the load is successful 
(mappableBlock != null), we can do this pmemVolumeManager work right?
{quote}
Good suggestion. As you pointed out, using #afterCache() is indirect and such 
an implementation can easily cause bugs. In the new patch, the afterCache work 
is executed after a mappableBlock is successfully loaded.
{quote}Call afterUncache() after deleting the file
{quote}
Yes, #afterUncache should be executed after the file is deleted.
{quote}public PmemVolumeManager(DNConf dnConf)
Can we only pass pmemVolumes and maxLockedPmem? That is cleaner IMO
{quote}
Another good suggestion. PmemVolumeManager actually just needs pmemVolumes and 
maxLockedPmem for instantiation.
{quote}getVolumeByIndex -> can this be package private
{quote}
Yes, package private is enough. We have modified the access specifier in the 
new patch.
{quote}getCacheFilePath(ExtendedBlockId key) -> Better name would be 
getCachedPath(ExtendedBlockId)
{quote}
Yes, the method name getCacheFilePath is a bit ambiguous. In the new patch, 
this method has been renamed to getCachedPath as you suggested.
{quote}dfs.datanode.cache.pmem.capacity -> I am not sure of any naming 
convention you follow in HDFS. But as a user I would prefer the name 
dfs.datanode.pmem.cache.capacity. Ditto for dfs.datanode.cache.pmem.dirs
{quote}
“dfs.datanode.cache” is the prefix followed for all the cache-related configs, 
so we would like to follow the pattern.

 

Thanks [~anoop.hbase] again for your comments.
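For reference, a minimal sketch of the visibility changes discussed above (the 
surrounding class is simplified, not the exact patch):

{code:java}
import com.google.common.annotations.VisibleForTesting;

class PmemVolumeManager {
  // fields and methods elided in this sketch
}

class FsDatasetCache {
  private final PmemVolumeManager pmemVolumeManager;

  FsDatasetCache(PmemVolumeManager pmemVolumeManager) {
    this.pmemVolumeManager = pmemVolumeManager;
  }

  // Package-private instead of public, exposed only so tests can reach it.
  @VisibleForTesting
  PmemVolumeManager getPmemVolumeManager() {
    return pmemVolumeManager;
  }
}
{code}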

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, 
> HDFS-14355.008.patch, HDFS-14355.009.patch
>
>
> This task is to implement caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.






[jira] [Work logged] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?focusedWorklogId=220909=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220909
 ]

ASF GitHub Bot logged work on HDDS-1347:


Author: ASF GitHub Bot
Created on: 30/Mar/19 18:23
Start Date: 30/Mar/19 18:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #670: HDDS-1347. In OM 
HA getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#issuecomment-478274201
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for branch |
   | +1 | mvninstall | 991 | trunk passed |
   | +1 | compile | 96 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 63 | trunk passed |
   | +1 | shadedclient | 714 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 101 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 65 | the patch passed |
   | +1 | compile | 92 | the patch passed |
   | +1 | cc | 92 | the patch passed |
   | +1 | javac | 92 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 55 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 729 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 120 | the patch passed |
   | +1 | javadoc | 51 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | -1 | unit | 41 | ozone-manager in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3385 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/670 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux bc6beb534300 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec82e4c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/1/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/1/testReport/ |
   | Max. process+thread count | 438 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 220909)
Time Spent: 20m  (was: 10m)

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to 

[jira] [Updated] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-30 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14355:
--
Attachment: HDFS-14355.009.patch

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, 
> HDFS-14355.008.patch, HDFS-14355.009.patch
>
>
> This task is to implement caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.






[jira] [Commented] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing

2019-03-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805908#comment-16805908
 ] 

Hudson commented on HDDS-1288:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16309 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16309/])
HDDS-1288. SCM - Failing test on trunk that waits for HB report (bharat: rev 
bf3b7fd732d6b4def8012994db6f9bedb25b8a9f)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestSCMNodeMetrics.java


> SCM - Failing test on trunk that waits for HB report processing
> ---
>
> Key: HDDS-1288
> URL: https://issues.apache.org/jira/browse/HDDS-1288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1288.01.patch, HDDS-1288.02.patch
>
>
> The test fails due to its dependence on Thread.sleep and the expectation 
> that the heartbeat is processed in time.
> {code}
> Error Message
> Expected exactly one metric for name HealthyNodes expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: Expected exactly one metric for name HealthyNodes 
> expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:275)
>   at 
> org.apache.hadoop.test.MetricsAsserts.getIntGauge(MetricsAsserts.java:157)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:151)
>   at 
> org.apache.hadoop.ozone.scm.node.TestSCMNodeMetrics.testNodeCountAndInfoMetricsReported(TestSCMNodeMetrics.java:147)
> {code}
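The usual fix pattern for this kind of flakiness is to poll for the condition 
instead of sleeping a fixed time; a hedged sketch using Hadoop's test helper 
(the metric accessor is a hypothetical stand-in):

{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// Inside the test: wait up to 100 seconds, checking every second, instead of
// assuming a fixed Thread.sleep is long enough for the heartbeat report.
GenericTestUtils.waitFor(
    () -> getHealthyNodeCount() == 1,  // hypothetical metric accessor
    1000, 100000);
{code}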






[jira] [Commented] (HDFS-14327) Using FQDN instead of IP to access servers with DNS resolving

2019-03-30 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805905#comment-16805905
 ] 

Fengnan Li commented on HDFS-14327:
---

[~elgoiri] up ^^ thanks a lot!

> Using FQDN instead of IP to access servers with DNS resolving
> -
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch, HDFS-14327.002.patch
>
>
> With [HDFS-14118|https://issues.apache.org/jira/browse/HDFS-14118], clients 
> can get the IP of the servers (NN/Routers) and use the IP addresses to access 
> the machine. This will fail in a secure environment, as Kerberos uses the 
> domain name (FQDN) in the principal, so it won't recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current logic to get the 
> domain name after the IP is fetched. After that, clients will still use the 
> domain name to access the servers.
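A minimal sketch of the reverse lookup described above, using the standard JDK 
API (illustrative only, not the patch itself):

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseLookupExample {
  // Resolve an IP string back to an FQDN so Kerberos principals match.
  static String toFqdn(String ip) throws UnknownHostException {
    InetAddress addr = InetAddress.getByName(ip);
    // getCanonicalHostName() performs the reverse DNS lookup; it falls back
    // to the textual IP if no PTR record is available.
    return addr.getCanonicalHostName();
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println(toFqdn("127.0.0.1")); // typically prints "localhost"
  }
}
{code}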






[jira] [Commented] (HDDS-1288) SCM - Failing test on trunk that waits for HB report processing

2019-03-30 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805896#comment-16805896
 ] 

Bharat Viswanadham commented on HDDS-1288:
--

+1 LGTM.

I will commit this shortly.

> SCM - Failing test on trunk that waits for HB report processing
> ---
>
> Key: HDDS-1288
> URL: https://issues.apache.org/jira/browse/HDDS-1288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1288.01.patch, HDDS-1288.02.patch
>
>
> The test fails due to its dependence on Thread.sleep and the expectation 
> that the heartbeat is processed in time.
> {code}
> Error Message
> Expected exactly one metric for name HealthyNodes expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: Expected exactly one metric for name HealthyNodes 
> expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:275)
>   at 
> org.apache.hadoop.test.MetricsAsserts.getIntGauge(MetricsAsserts.java:157)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:151)
>   at 
> org.apache.hadoop.ozone.scm.node.TestSCMNodeMetrics.testNodeCountAndInfoMetricsReported(TestSCMNodeMetrics.java:147)
> {code}






[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220901=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220901
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 30/Mar/19 17:36
Start Date: 30/Mar/19 17:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270634511
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -161,7 +161,10 @@ public TransactionContext startTransaction(
   @Override
   public long takeSnapshot() throws IOException {
 LOG.info("Saving Ratis snapshot on the OM.");
-return ozoneManager.saveRatisSnapshot();
+if (ozoneManager != null) {
+  return ozoneManager.saveRatisSnapshot();
+}
+return 0;
 
 Review comment:
   Question: Here we are returning 0 (should we return the lastAppliedIndex we 
are writing?). How will this be used by Ratis?
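
For context, Ratis treats the value returned by takeSnapshot() as the log index 
the snapshot covers. A hedged sketch of the alternative the question hints at, 
assuming the state machine extends Ratis' BaseStateMachine (which tracks the 
last applied term/index):

{code:java}
@Override
public long takeSnapshot() throws IOException {
  if (ozoneManager != null) {
    return ozoneManager.saveRatisSnapshot();
  }
  // Hypothetical: report the last applied index instead of 0 so Ratis does
  // not assume the snapshot covers nothing.
  return getLastAppliedTermIndex().getIndex();
}
{code}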
 



Issue Time Tracking
---

Worklog Id: (was: 220901)
Time Spent: 2.5h  (was: 2h 20m)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpointed on demand. Ratis 
> snapshots will only preserve the last applied log index of the State Machine 
> on disk. This index will be stored in a file in the OM metadata dir.
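A minimal sketch of persisting the last applied index to a file in the OM 
metadata dir, as described above (the file name and plain-text layout are 
assumptions for illustration):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SnapshotIndexFile {
  private final Path indexFile;

  public SnapshotIndexFile(String omMetadataDir) {
    // "om.snapshot.index" is a hypothetical file name for this sketch.
    this.indexFile = Paths.get(omMetadataDir, "om.snapshot.index");
  }

  // Persist the last applied log index when a snapshot is taken.
  public void write(long lastAppliedIndex) throws IOException {
    Files.write(indexFile,
        Long.toString(lastAppliedIndex).getBytes(StandardCharsets.UTF_8));
  }

  // Read the index back on restart; 0 if no snapshot has been taken yet.
  public long read() throws IOException {
    if (!Files.exists(indexFile)) {
      return 0;
    }
    String value = new String(Files.readAllBytes(indexFile),
        StandardCharsets.UTF_8).trim();
    return Long.parseLong(value);
  }
}
{code}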






[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220902=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220902
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 30/Mar/19 17:36
Start Date: 30/Mar/19 17:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270634602
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1617,7 +1617,7 @@
 
   
 ozone.om.ratis.snapshot.auto.trigger.threshold
-40L
+40
 
 Review comment:
   Question: If we have taken a snapshot at every 400k transactions, and after 
that 200k more transactions have happened, then when a follower OM restarts it 
knows it has state only up to 400k, so will it apply those 200k transactions 
again?
 



Issue Time Tracking
---

Worklog Id: (was: 220902)
Time Spent: 2.5h  (was: 2h 20m)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpointed on demand. Ratis 
> snapshots will only preserve the last applied log index of the State Machine 
> on disk. This index will be stored in a file in the OM metadata dir.






[jira] [Updated] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-03-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1347:
-
Target Version/s: 0.5.0

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  
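A hedged sketch of the leader-only guard this describes (the interface and 
exception message are assumptions, not the actual OzoneManager API):

{code:java}
import java.io.IOException;

// Assumed minimal view of the OM's Ratis server for this sketch.
interface RatisServerView {
  boolean isLeader();
}

class S3SecretGuard {
  private final RatisServerView ratisServer;

  S3SecretGuard(RatisServerView ratisServer) {
    this.ratisServer = ratisServer;
  }

  // Reject getS3Secret on non-leader OMs so the secret is generated and
  // replicated through the Ratis leader only.
  void checkLeaderStatus() throws IOException {
    if (!ratisServer.isLeader()) {
      throw new IOException("Not the leader OM; retry against the leader.");
    }
  }
}
{code}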






[jira] [Updated] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-03-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1347:
-
Status: Patch Available  (was: Open)

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  






[jira] [Work logged] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?focusedWorklogId=220900=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220900
 ]

ASF GitHub Bot logged work on HDDS-1347:


Author: ASF GitHub Bot
Created on: 30/Mar/19 17:25
Start Date: 30/Mar/19 17:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #670: 
HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 220900)
Time Spent: 10m
Remaining Estimate: 0h

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  






[jira] [Updated] (HDDS-1347) In OM HA getS3Secret call Should happen only leader OM

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1347:
-
Labels: pull-request-available  (was: )

> In OM HA getS3Secret call Should happen only leader OM
> --
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  






[jira] [Updated] (HDDS-1357) ozone s3 shell command has confusing subcommands

2019-03-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1357:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~elek] for the contribution and [~ajaykumar] for the review.

I have committed this to trunk and ozone-0.4.

> ozone s3 shell command has confusing subcommands
> 
>
> Key: HDDS-1357
> URL: https://issues.apache.org/jira/browse/HDDS-1357
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Let's check the potential subcommands of ozone sh:
> {code}
> [hadoop@om-0 keytabs]$ ozone sh
> Incomplete command
> Usage: ozone sh [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for Ozone object store
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This is fine, but for ozone s3:
> {code}
> [hadoop@om-0 keytabs]$ ozone s3
> Incomplete command
> Usage: ozone s3 [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for S3 specific operations
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   getsecretReturns s3 secret for current user
>   path Returns the ozone path for S3Bucket
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This list should contain only the getsecret/path commands and not the 
> volume/bucket/key subcommands.
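Ozone's shells are built on picocli, so the fix amounts to registering only the 
S3-specific handlers as subcommands of ozone s3. A hedged sketch (the handler 
class names are assumptions):

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;

@Command(name = "getsecret", description = "Returns s3 secret for current user")
class GetSecretHandler implements Runnable {
  @Override public void run() { /* elided */ }
}

@Command(name = "path", description = "Returns the ozone path for S3Bucket")
class PathHandler implements Runnable {
  @Override public void run() { /* elided */ }
}

// Only the S3-specific handlers are listed; volume/bucket/key/token are no
// longer inherited from the generic shell.
@Command(name = "s3",
    description = "Shell for S3 specific operations",
    subcommands = { GetSecretHandler.class, PathHandler.class })
public class S3Shell implements Runnable {
  @Override
  public void run() {
    // picocli prints the usage text when no subcommand is given.
    new CommandLine(this).usage(System.out);
  }
}
{code}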






[jira] [Updated] (HDDS-1357) ozone s3 shell command has confusing subcommands

2019-03-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1357:
-
Fix Version/s: 0.5.0
   0.4.0

> ozone s3 shell command has confusing subcommands
> 
>
> Key: HDDS-1357
> URL: https://issues.apache.org/jira/browse/HDDS-1357
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Let's check the potential subcommands of ozone sh:
> {code}
> [hadoop@om-0 keytabs]$ ozone sh
> Incomplete command
> Usage: ozone sh [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for Ozone object store
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This is fine, but for ozone s3:
> {code}
> [hadoop@om-0 keytabs]$ ozone s3
> Incomplete command
> Usage: ozone s3 [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for S3 specific operations
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   getsecretReturns s3 secret for current user
>   path Returns the ozone path for S3Bucket
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This list should contain only the getsecret/path commands and not the 
> volume/bucket/key subcommands.






[jira] [Work logged] (HDDS-1357) ozone s3 shell command has confusing subcommands

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1357?focusedWorklogId=220898=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220898
 ]

ASF GitHub Bot logged work on HDDS-1357:


Author: ASF GitHub Bot
Created on: 30/Mar/19 17:13
Start Date: 30/Mar/19 17:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #663: 
HDDS-1357. ozone s3 shell command has confusing subcommands
URL: https://github.com/apache/hadoop/pull/663
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 220898)
Time Spent: 1h 10m  (was: 1h)

> ozone s3 shell command has confusing subcommands
> 
>
> Key: HDDS-1357
> URL: https://issues.apache.org/jira/browse/HDDS-1357
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Let's check the potential subcommands of ozone sh:
> {code}
> [hadoop@om-0 keytabs]$ ozone sh
> Incomplete command
> Usage: ozone sh [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for Ozone object store
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This is fine, but for ozone s3:
> {code}
> [hadoop@om-0 keytabs]$ ozone s3
> Incomplete command
> Usage: ozone s3 [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for S3 specific operations
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   getsecretReturns s3 secret for current user
>   path Returns the ozone path for S3Bucket
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This list should contain only the getsecret/path commands and not the 
> volume/bucket/key subcommands.






[jira] [Work logged] (HDDS-1357) ozone s3 shell command has confusing subcommands

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1357?focusedWorklogId=220897=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220897
 ]

ASF GitHub Bot logged work on HDDS-1357:


Author: ASF GitHub Bot
Created on: 30/Mar/19 17:12
Start Date: 30/Mar/19 17:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #663: HDDS-1357. 
ozone s3 shell command has confusing subcommands
URL: https://github.com/apache/hadoop/pull/663#issuecomment-478266353
 
 
   +1 LGTM. I will commit this shortly.
 



Issue Time Tracking
---

Worklog Id: (was: 220897)
Time Spent: 1h  (was: 50m)

> ozone s3 shell command has confusing subcommands
> 
>
> Key: HDDS-1357
> URL: https://issues.apache.org/jira/browse/HDDS-1357
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Let's check the potential subcommands of ozone sh:
> {code}
> [hadoop@om-0 keytabs]$ ozone sh
> Incomplete command
> Usage: ozone sh [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for Ozone object store
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This is fine, but for ozone s3:
> {code}
> [hadoop@om-0 keytabs]$ ozone s3
> Incomplete command
> Usage: ozone s3 [-hV] [--verbose] [-D=]... [COMMAND]
> Shell for S3 specific operations
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   getsecretReturns s3 secret for current user
>   path Returns the ozone path for S3Bucket
>   volume, vol  Volume specific operations
>   bucket   Bucket specific operations
>   key  Key specific operations
>   tokenToken specific operations
> {code}
> This list should contain only the getsecret/path commands and not the 
> volume/bucket/key subcommands.






[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-30 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805884#comment-16805884
 ] 

Anoop Sam John commented on HDFS-14355:
---

getBlockInputStreamWithCheckingPmemCache -> Can be a private method

public PmemVolumeManager getPmemVolumeManager -> Why is this being exposed? For 
tests? If so, can this be package private? And also mark it with 
@VisibleForTesting

I think the afterCache() thing is an unwanted indirection
{code}
// FsDatasetCache
try {
  mappableBlock = cacheLoader.load(length, blockIn, metaIn,
      blockFileName, key);
} catch (ChecksumException e) {
  // Exception message is bogus since this wasn't caused by a file read
  LOG.warn("Failed to cache the block [key=" + key + "]!", e);
  return;
}
mappableBlock.afterCache();

// PmemMappedBlock
@Override
public void afterCache() {
  pmemVolumeManager.afterCache(key, volumeIndex);
}

// PmemVolumeManager
public void afterCache(ExtendedBlockId key, Byte volumeIndex) {
  blockKeyToVolume.put(key, volumeIndex);
}
{code}
Actually in PmemMappableBlockLoader#load, once the load is successful 
(mappableBlock != null), we can do this pmemVolumeManager work right?
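
A sketch of that inlining (a fragment based on the snippet above; the helper 
name doLoad and the getVolumeIndex accessor are assumptions):

{code}
// PmemMappableBlockLoader
MappableBlock load(long length, FileInputStream blockIn,
    FileInputStream metaIn, String blockFileName,
    ExtendedBlockId key) throws IOException {
  PmemMappedBlock mappableBlock = doLoad(length, blockIn, metaIn,
      blockFileName, key);
  if (mappableBlock != null) {
    // Formerly mappableBlock.afterCache() -> pmemVolumeManager.afterCache(...)
    pmemVolumeManager.afterCache(key, mappableBlock.getVolumeIndex());
  }
  return mappableBlock;
}
{code}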

{code}
// PmemMappedBlock
public void close() {
  pmemVolumeManager.afterUncache(key);
  ...
  FsDatasetUtil.deleteMappedFile(cacheFilePath);
}
{code}
Call afterUncache() after deleting the file

public PmemVolumeManager(DNConf dnConf)
Can we only pass pmemVolumes and maxLockedPmem? That is cleaner IMO

getVolumeByIndex -> can this be package private

getCacheFilePath(ExtendedBlockId key) -> Better name would be 
getCachedPath(ExtendedBlockId)

dfs.datanode.cache.pmem.capacity -> I am not sure of any naming convention you 
follow in HDFS. But as a user I would prefer the name 
dfs.datanode.pmem.cache.capacity. Ditto for dfs.datanode.cache.pmem.dirs


> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch, HDFS-14355.007.patch, 
> HDFS-14355.008.patch
>
>
> This task is to implement caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.






[jira] [Commented] (HDFS-14400) Namenode ExpiredHeartbeats metric

2019-03-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805854#comment-16805854
 ] 

Íñigo Goiri commented on HDFS-14400:


Isn't this just a counter?
The problem would be if we double counted. 

> Namenode ExpiredHeartbeats metric
> -
>
> Key: HDFS-14400
> URL: https://issues.apache.org/jira/browse/HDFS-14400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: HDFS-14400-001.patch
>
>
> Noticed incorrect value in ExpiredHeartbeats metrics under namenode JMX.
> We increment the ExpiredHeartbeats count when a Datanode is dead, but we miss 
> decrementing it when the Datanode comes back alive.
> {code}
> { "name" : "Hadoop:service=NameNode,name=FSNamesystem", "modelerType" : 
> "FSNamesystem", "tag.Context" : "dfs", "tag.TotalSyncTimes" : "7 ", 
> "tag.HAState" : "active", ... "ExpiredHeartbeats" : 2, ... }
> {code}
>  






[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-03-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805852#comment-16805852
 ] 

Íñigo Goiri commented on HDFS-14316:


Thanks [~ayushtkn] for the review and the commit! 

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, 
> HDFS-14316-HDFS-13891.007.patch, HDFS-14316-HDFS-13891.008.patch, 
> HDFS-14316-HDFS-13891.009.patch, HDFS-14316-HDFS-13891.010.patch, 
> HDFS-14316-HDFS-13891.011.patch, HDFS-14316-HDFS-13891.012.patch, 
> HDFS-14316-HDFS-13891.013.patch, HDFS-14316-HDFS-13891.014.patch, 
> HDFS-14316-HDFS-13891.015.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.






[jira] [Commented] (HDFS-14372) NPE while DN is shutting down

2019-03-30 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805849#comment-16805849
 ] 

lujie commented on HDFS-14372:
--

ping ->

I have provided a patch, which includes a UT to reproduce the issue. Could 
anybody review it?

Thanks!

> NPE while DN is shutting down
> -
>
> Key: HDFS-14372
> URL: https://issues.apache.org/jira/browse/HDFS-14372
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HDFS-14372_0.patch, HDFS-14372_1.patch
>
>
> Take the code BPServiceActor#register:
> {code:java}
> while (shouldRun()) {
>   try {
>     // Use returned registration from namenode with updated fields
>     newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
>     newBpRegistration.setNamespaceInfo(nsInfo);
>     bpRegistration = newBpRegistration;
>     break;
>   } catch (EOFException e) { // namenode might have just restarted
>     ...
>   }
> }
> LOG.info("Block pool " + this + " successfully registered with NN");
> bpos.registrationSucceeded(this, bpRegistration);
> {code}
> If the DN is shut down, the above code will exit the loop with bpRegistration 
> == null, and that null value will then be used in 
> DataNode#bpRegistrationSucceeded:
> {code:java}
> if(!storage.getDatanodeUuid().equals(bpRegistration.getDatanodeUuid()))
> {code}
> hence the NPE:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1583)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:425)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:807)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:294)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:840)
> at java.lang.Thread.run(Thread.java:745)
> {code}
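One possible guard before the post-loop handling (a sketch only; the attached 
patch may differ):
{code}
// Sketch: do not call registrationSucceeded() with a null registration when
// the actor left the retry loop because the DN is shutting down.
if (bpRegistration == null) {
  throw new IOException("DN shut down before registration with the NN completed");
}
LOG.info("Block pool " + this + " successfully registered with NN");
bpos.registrationSucceeded(this, bpRegistration);
{code}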






[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-30 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805833#comment-16805833
 ] 

Aravindan Vijayan commented on HDDS-1189:
-

I think the patch looks great! Minor comments.

1. 
{code}
<name>ozone.recon.sql.db.jdbc.url</name>
<value>jdbc:sqlite:/tmp/ozone_recon_sqlite.db</value>
{code}
Is this value just the default? It may be preferable to have the Recon SQL DB 
in the same metadata dir as the container DB.

2. 
{code}
ozone.recon.sql.db.password
{code}
Can we specify this as a password field? 

3. In the findbugs exclude file, why are we adding the package 
"org.hadoop.ozone.recon.schema" even after adding the child packages like 
org.hadoop.ozone.recon.schema.tables and 
org.hadoop.ozone.recon.schema.tables.pojos?


> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch, HDDS-1189.04.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful query DSL that abstracts out SQL dialects; b) it 
> allows seamless code-to-schema and schema-to-code transition, critical for 
> creating DDL through the code and for unit testing across versions of the 
> application (see the sketch after this list).
> - Add e2e unit tests suite for Recon entities, created based on the design doc
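For readers unfamiliar with jOOQ, the DSL style referred to above looks roughly 
like this (table and column names are made up, not the actual Recon schema):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

// Sketch: jOOQ renders dialect-correct SQL from a typed query DSL.
try (Connection conn = DriverManager.getConnection(
    "jdbc:sqlite:/tmp/ozone_recon_sqlite.db")) {
  DSLContext ctx = DSL.using(conn, SQLDialect.SQLITE);
  ctx.select()
      .from(DSL.table("container_history"))        // made-up table
      .where(DSL.field("container_id").eq(42L))    // made-up column
      .fetch();
}
{code}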






[jira] [Work logged] (HDDS-1357) ozone s3 shell command has confusing subcommands

2019-03-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1357?focusedWorklogId=220875=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220875
 ]

ASF GitHub Bot logged work on HDDS-1357:


Author: ASF GitHub Bot
Created on: 30/Mar/19 12:31
Start Date: 30/Mar/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #663: HDDS-1357. ozone 
s3 shell command has confusing subcommands
URL: https://github.com/apache/hadoop/pull/663#issuecomment-478241463
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1067 | trunk passed |
   | +1 | compile | 106 | trunk passed |
   | +1 | checkstyle | 34 | trunk passed |
   | +1 | mvnsite | 115 | trunk passed |
   | +1 | shadedclient | 648 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 112 | trunk passed |
   | +1 | javadoc | 94 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 108 | the patch passed |
   | +1 | compile | 99 | the patch passed |
   | +1 | javac | 99 | the patch passed |
   | +1 | checkstyle | 26 | the patch passed |
   | +1 | mvnsite | 89 | the patch passed |
   | +1 | shellcheck | 26 | There were no new shellcheck issues. |
   | +1 | shelldocs | 16 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 116 | the patch passed |
   | +1 | javadoc | 73 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | +1 | unit | 44 | ozone-manager in the patch passed. |
   | -1 | unit | 602 | integration-test in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 4441 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-663/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/663 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  compile  javac  javadoc  mvninstall  shadedclient  findbugs  
checkstyle  |
   | uname | Linux 8221beffdade 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d9e9e56 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-663/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-663/2/testReport/ |
   | Max. process+thread count | 4051 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-663/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220875)
Time Spent: 50m  (was: 40m)

> ozone s3 shell command has confusing subcommands
> 
>
> Key: HDDS-1357
> URL: https://issues.apache.org/jira/browse/HDDS-1357
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  

[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-03-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805750#comment-16805750
 ] 

Hadoop QA commented on HDFS-13853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 12s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m  
7s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964288/HDFS-13853-HDFS-13891-02.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 013fe113028b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / dea3798 |
| maven | version: Apache Maven 3.3.9 |

[jira] [Commented] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-03-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805716#comment-16805716
 ] 

Hadoop QA commented on HDFS-13853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
46s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
55s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964270/HDFS-13853-HDFS-13891-01.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 702af9af563d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-03-30 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13853:

Attachment: HDFS-13853-HDFS-13891-02.patch

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch
>
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
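A rough sketch of the direction an update should take instead (the helper names 
here are illustrative, not the actual RouterAdmin/MountTable API):
{code}
// Sketch: mutate the existing entry so fields not named on the command line
// (order, read-only, owner, ...) are preserved instead of being reset.
MountTable existing = getMountEntry(mount);              // illustrative lookup
if (existing == null) {
  addEntry(MountTable.newInstance(mount, destMap));      // no entry yet: create
} else {
  existing.setDestinations(toRemoteLocations(destMap));  // update only the dests
  updateEntry(existing);
}
{code}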






[jira] [Commented] (HDFS-13912) RBF: Add methods to RouterAdmin to set order, read only, and chown

2019-03-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805667#comment-16805667
 ] 

Hadoop QA commented on HDFS-13912:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
42s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940009/HDFS-13912-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 54722398d16e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d9e9e56 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26550/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26550/testReport/ |
| Max. process+thread count | 1364 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26550/console |
| Powered by | Apache Yetus 0.8.0