[jira] [Created] (HDFS-13582) Improve backward compatibility for HDFS-13176 (WebHdfs file path gets truncated when having semicolon (;) inside)

2018-05-17 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13582:


 Summary: Improve backward compatibility for HDFS-13176 (WebHdfs 
file path gets truncated when having semicolon (;) inside)
 Key: HDFS-13582
 URL: https://issues.apache.org/jira/browse/HDFS-13582
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel
 Fix For: 3.2.0


Encode special characters only if necessary in order to improve backward 
compatibility in the following scenario:

new (having HDFS-13176) WebHdfs client -> old (not having HDFS-13176) WebHdfs 
server
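
For illustration, a minimal sketch of the "encode only if necessary" idea (the 
helper name is hypothetical, not the actual patch code): ordinary paths are 
passed through untouched, so a new client keeps the old wire format whenever 
the path contains no special character.
{code:java}
// Hypothetical helper, for illustration only: escape the path lazily so that
// requests without a semicolon stay byte-for-byte compatible with old servers.
static String encodeIfNecessary(String path) {
  if (path.indexOf(';') < 0) {
    return path;                   // common case: unchanged, old servers accept it
  }
  return path.replace(";", "%3B"); // percent-encode only the problematic character
}
{code}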






[jira] [Commented] (HDDS-77) Key replication factor and type should be stored per key by Ozone Manager

2018-05-17 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-77?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479012#comment-16479012
 ] 

Nanda kumar commented on HDDS-77:
-

Thanks [~msingh] for reporting and working on this.
+1 (non-binding), the patch looks good to me.

> Key replication factor and type should be stored per key by Ozone Manager
> -
>
> Key: HDDS-77
> URL: https://issues.apache.org/jira/browse/HDDS-77
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-77.001.patch
>
>
> Currently, for a key, a client requests multiple blocks through allocate 
> block calls. However, it is possible for an allocate block call to have a 
> different replication type and factor than the blocks allocated during create 
> key.
> This Jira proposes to store the replication factor and type values inside the 
> OzoneManager and re-use the values for the subsequent block allocation calls.
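
As an illustration of the proposal above, a hedged Java sketch (the class, 
field, and method names are assumptions, not the actual patch):
{code:java}
// Hypothetical sketch: remember the replication settings chosen at create-key
// time and reuse them for every later allocate-block call on the same key.
class OpenKeyInfo {
  final ReplicationType type;      // e.g. RATIS or STAND_ALONE
  final ReplicationFactor factor;  // e.g. ONE or THREE
  OpenKeyInfo(ReplicationType type, ReplicationFactor factor) {
    this.type = type;
    this.factor = factor;
  }
}

AllocatedBlock allocateBlock(OpenKeyInfo key, long size) throws IOException {
  // Ignore any type/factor supplied on this call; use the values stored with
  // the key so that all blocks of a key are replicated consistently.
  return scmBlockClient.allocateBlock(size, key.type, key.factor);
}
{code}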






[jira] [Updated] (HDFS-13576) RBF: Add destination path length validation for add/update mount entry

2018-05-17 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13576:
-
Description: 
Currently there is no validation to check the destination path length while 
adding or updating a mount entry, but while trying to create a directory using 
this mount entry,
{noformat}
RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
is thrown with the exception message 
{noformat}
"maximum path component name limit of ... directory / is 
exceeded: limit=255 length=1817"{noformat}
 

  was:
Currently there is no validation to check the destination path length while 
adding or updating a mount entry, but while trying to create a directory using 
this mount entry,

 
{noformat}
RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
is thrown with the exception message 
{noformat}
"maximum path component name limit of ... directory / is 
exceeded: limit=255 length=1817"{noformat}
 


> RBF: Add destination path length validation for add/update mount entry
> --
>
> Key: HDFS-13576
> URL: https://issues.apache.org/jira/browse/HDFS-13576
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Priority: Minor
>
> Currently there is no validation to check the destination path length while 
> adding or updating a mount entry, but while trying to create a directory 
> using this mount entry,
> {noformat}
> RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
> is thrown with the exception message 
> {noformat}
> "maximum path component name limit of ... directory / is 
> exceeded: limit=255 length=1817"{noformat}
>  
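
A hedged sketch of the validation this sub-task asks for, applied at 
mount-entry add/update time (the method name is hypothetical; 255 mirrors the 
NameNode's default dfs.namenode.fs-limits.max-component-length, which is 
enforced on the UTF-8 byte length of each component):
{code:java}
// Hypothetical fail-fast check for addMountEntry/updateMountEntry.
static void validateDestination(String dest) throws IOException {
  final int limit = 255;
  for (String component : dest.split("/")) {
    int length = component.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
    if (length > limit) {
      throw new IOException("maximum path component name limit of " + component
          + " is exceeded: limit=" + limit + " length=" + length);
    }
  }
}
{code}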






[jira] [Updated] (HDFS-13576) RBF: Add destination path length validation for add/update mount entry

2018-05-17 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13576:
-
Description: 
Currently there is no validation to check the destination path length while 
adding or updating a mount entry, but while trying to create a directory using 
this mount entry,

 
{noformat}
RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
is thrown with the exception message 
{noformat}
"maximum path component name limit of ... directory / is 
exceeded: limit=255 length=1817"{noformat}
 

  was:Currently there is no validation to check the destination path length 
while adding or updating a mount entry, but while trying to create a directory 
using this mount entry, 
RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException)
 is thrown with the exception message "maximum path component name limit of 
... directory / is exceeded: limit=255 length=1817"


> RBF: Add destination path length validation for add/update mount entry
> --
>
> Key: HDFS-13576
> URL: https://issues.apache.org/jira/browse/HDFS-13576
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Priority: Minor
>
> Currently there is no validation to check the destination path length while 
> adding or updating a mount entry, but while trying to create a directory 
> using this mount entry,
>  
> {noformat}
> RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
> is thrown with the exception message 
> {noformat}
> "maximum path component name limit of ... directory / is 
> exceeded: limit=255 length=1817"{noformat}
>  






[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478996#comment-16478996
 ] 

genericqa commented on HDFS-13573:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13573 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923897/HDFS-13573.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 397988c2b80e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Resolved] (HDDS-83) Rename StorageLocationReport class to VolumeInfo

2018-05-17 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-83?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-83.
-
   Resolution: Not A Problem
Fix Version/s: 0.2.1

Resolving this, as the change is not required.

> Rename StorageLocationReport class to VolumeInfo
> 
>
> Key: HDDS-83
> URL: https://issues.apache.org/jira/browse/HDDS-83
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.2.1
>
>







[jira] [Commented] (HDDS-83) Rename StorageLocationReport class to VolumeInfo

2018-05-17 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-83?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478916#comment-16478916
 ] 

Nanda kumar commented on HDDS-83:
-

I feel it's better to name it {{StorageReport}} / {{StorageInfo}} rather than 
{{VolumeInfo}}, as the latter might cause confusion with Ozone's own 
{{VolumeInfo}}.

> Rename StorageLocationReport class to VolumeInfo
> 
>
> Key: HDDS-83
> URL: https://issues.apache.org/jira/browse/HDDS-83
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
>







[jira] [Assigned] (HDDS-74) Rename name of properties related to configuration tags

2018-05-17 Thread Sandeep Nemuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-74?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-74:
--

Assignee: Sandeep Nemuri

> Rename name of properties related to configuration tags
> ---
>
> Key: HDDS-74
> URL: https://issues.apache.org/jira/browse/HDDS-74
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
>
> We have two properties in {{ozone-default.xml}} for configuration tags.
> * {{hadoop.custom.tags}}
> * {{ozone.system.tags}}
> For better readability, these properties can be renamed to
> * {{ozone.tags.custom}}
> * {{ozone.tags.system}}






[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478893#comment-16478893
 ] 

genericqa commented on HDDS-76:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-76 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923898/HDDS-76.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 5a3b8c0cd19f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/129/testReport/ |
| Max. process+thread count | 357 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/129/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1647#comment-1647
 ] 

genericqa commented on HDFS-13581:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13581 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923900/HDFS-13581.000.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux f02ac0e792e2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 329 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24241/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> On clicking DN UI logs link it uses http protocol for Wire encrypted cluster
> 
>
> Key: HDFS-13581
> URL: https://issues.apache.org/jira/browse/HDFS-13581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Attachments: HDFS-13581.000.patch
>
>
> On clicking the DN UI logs link, the HTTPS URI gets redirected to an http URI 
> and fails.






[jira] [Commented] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-17 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478871#comment-16478871
 ] 

SammiChen commented on HDFS-13540:
--

[~xiaochen], thanks for the explanation. It makes sense to change the Jira 
title as you propose. I double-checked the code; *curStripeBuf* is only used 
in two EC read functions.

For the new test case, I would suggest:
 # Change the name from testCloseDoesNotGetBuffer to 
testCloseDoesNotAllocateNewBuffer. It's clearer.
 # The test case always passes even when I use "true" in 
closeCurrentBlockReaders, because *curStripeBuf* is set to *null* after 
*stream.close* is called, so *assertNull(stream.getCurStripeBuf());* always 
holds.

An alternative way to check whether a buffer was allocated is to check the 
number of buffers held by *ElasticByteBufferPool*. 

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, 
> HDFS-13540.03.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are 2 OOM that looks interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} gets a buffer from the 
> buffer pool. We could save that cost when the call is not for a read (e.g. 
> close, unbuffer, etc.).
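
A minimal sketch of the fix direction described above, assuming a boolean 
parameter is threaded through (the parameter and field names here are 
illustrative, not the actual patch):
{code:java}
// Only touch the buffer pool when the caller actually needs the stripe buffer;
// the close()/unbuffer() paths would pass shouldAllocate = false.
private void resetCurStripeBuffer(boolean shouldAllocate) {
  if (curStripeBuf == null && shouldAllocate) {
    curStripeBuf = BUFFER_POOL.getBuffer(useDirectBuffer(), cellSize * dataBlkNum);
  }
  if (curStripeBuf != null) {
    curStripeBuf.clear();
  }
}
{code}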






[jira] [Created] (HDDS-83) Rename StorageLocationReport class to VolumeInfo

2018-05-17 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-83:
---

 Summary: Rename StorageLocationReport class to VolumeInfo
 Key: HDDS-83
 URL: https://issues.apache.org/jira/browse/HDDS-83
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee









[jira] [Commented] (HDDS-73) Add acceptance tests for Ozone Shell

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478852#comment-16478852
 ] 

genericqa commented on HDDS-73:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
17s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-73 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923893/HDDS-73.003.patch |
| Optional Tests |  asflicense  unit  shellcheck  shelldocs  |
| uname | Linux 413b69dd8164 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/128/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/128/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/acceptance-test U: hadoop-ozone/acceptance-test |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/128/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add acceptance tests for Ozone Shell
> 
>
> Key: HDDS-73
> URL: https://issues.apache.org/jira/browse/HDDS-73
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-73.001.patch, HDDS-73.002.patch, HDDS-73.003.patch
>
>
> This Jira aims to add acceptance tests related to the http and o3 schemes and 
> various server port combinations in shell commands.






[jira] [Updated] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13581:
---
Description: On clicking the DN UI logs link, the HTTPS URI gets redirected to 
an http URI and fails.  (was: On clicking the DN UI logs link, it uses the http 
protocol for a wire-encrypted cluster. When the link's address is changed to 
https, it throws the proper expected error message.)

> On clicking DN UI logs link it uses http protocol for Wire encrypted cluster
> 
>
> Key: HDFS-13581
> URL: https://issues.apache.org/jira/browse/HDFS-13581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Attachments: HDFS-13581.000.patch
>
>
> On clicking the DN UI logs link, the HTTPS URI gets redirected to an http URI 
> and fails.






[jira] [Updated] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13581:
---
Status: Patch Available  (was: Open)

> On clicking DN UI logs link it uses http protocol for Wire encrypted cluster
> 
>
> Key: HDFS-13581
> URL: https://issues.apache.org/jira/browse/HDFS-13581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Attachments: HDFS-13581.000.patch
>
>
> On clicking the DN UI logs link, it uses the http protocol for a 
> wire-encrypted cluster. When the link's address is changed to https, it 
> throws the proper expected error message.






[jira] [Commented] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478834#comment-16478834
 ] 

Shashikant Banerjee commented on HDFS-13581:


The Datanode UI logs link actually points to an https URL that is not 
terminated with a trailing "/". When the browser tries to open that URL it is 
inaccessible, and the request falls back to a referred link over http. Patch v0 
fixes the issue by adding a trailing "/" to the DN UI logs link.
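
For illustration, the behavior described above looks roughly like the 
following (the host is an example; 9864/9865 are the default DN http/https 
ports in Hadoop 3, and the exact redirect target is an assumption):
{noformat}
https://dn-host:9865/logs    -> inaccessible, falls back to http://dn-host:9864/logs/
https://dn-host:9865/logs/   -> served directly over https
{noformat}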

> On clicking DN UI logs link it uses http protocol for Wire encrypted cluster
> 
>
> Key: HDFS-13581
> URL: https://issues.apache.org/jira/browse/HDFS-13581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Attachments: HDFS-13581.000.patch
>
>
> On clicking the DN UI logs link, it uses the http protocol for a 
> wire-encrypted cluster. When the link's address is changed to https, it 
> throws the proper expected error message.






[jira] [Updated] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13581:
---
Attachment: HDFS-13581.000.patch

> On clicking DN UI logs link it uses http protocol for Wire encrypted cluster
> 
>
> Key: HDFS-13581
> URL: https://issues.apache.org/jira/browse/HDFS-13581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Attachments: HDFS-13581.000.patch
>
>
> On clicking the DN UI logs link, it uses the http protocol for a 
> wire-encrypted cluster. When the link's address is changed to https, it 
> throws the proper expected error message.






[jira] [Created] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13581:
--

 Summary: On clicking DN UI logs link it uses http protocol for 
Wire encrypted cluster
 Key: HDFS-13581
 URL: https://issues.apache.org/jira/browse/HDFS-13581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


On clicking the DN UI logs link, it uses the http protocol for a wire-encrypted 
cluster. When the link's address is changed to https, it throws the proper 
expected error message.






[jira] [Updated] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-17 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-76:

Attachment: HDDS-76.01.patch

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID, which is sent across to 
> SCM for maintaining storage report info. This Jira aims to also send the data 
> dir paths of the actual disks, as well as the StorageType info for each 
> volume on the datanode, to SCM.
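
A hedged Java-side sketch of the extended report (the builder and accessor 
names are assumptions based on the summary, not the actual patch):
{code:java}
// Hypothetical construction of the extended storage report on the datanode:
// besides the UUID, attach the volume's data dir path and its StorageType.
SCMStorageReport report = SCMStorageReport.newBuilder()
    .setStorageUuid(volume.getStorageID())
    .setStorageLocation(volume.getDataDir().getAbsolutePath())   // new field
    .setStorageType(toProto(volume.getStorageType()))            // new field
    .build();
{code}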






[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-17 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478815#comment-16478815
 ] 

Shashikant Banerjee commented on HDDS-76:
-

Thanks [~msingh] for the review. Patch v1 addresses your review comments. 
Please have a look.

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID, which is sent across to 
> SCM for maintaining storage report info. This Jira aims to also send the data 
> dir paths of the actual disks, as well as the StorageType info for each 
> volume on the datanode, to SCM.






[jira] [Updated] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13573:
-
Attachment: HDFS-13573.02.patch

> Javadoc for BlockPlacementPolicyDefault is inaccurate
> -
>
> Key: HDFS-13573
> URL: https://issues.apache.org/jira/browse/HDFS-13573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: HDFS-13573.01.patch, HDFS-13573.02.patch
>
>
> Current rule of default block placement policy:
> {quote}The replica placement strategy is that if the writer is on a datanode,
>  the 1st replica is placed on the local machine,
>  otherwise a random datanode. The 2nd replica is placed on a datanode
>  that is on a different rack. The 3rd replica is placed on a datanode
>  which is on a different node of the rack as the second replica.
> {quote}
> *if the writer is on a datanode, the 1st replica is placed on the local 
> machine*; actually, this can be decided by the HDFS client. The client can 
> pass {{CreateFlag#NO_LOCAL_WRITE}} to request that no block replica be put on 
> the local datanode. But subsequent replicas will still follow the default 
> block placement policy.






[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478806#comment-16478806
 ] 

Yiqun Lin commented on HDFS-13573:
--

{quote}
I would suggest not leaving out the scenario where the writer is not on a 
datanode, and having the following:
...
{quote}
The change looks good to me. [~zvenczel], feel free to attach the updated patch.

> Javadoc for BlockPlacementPolicyDefault is inaccurate
> -
>
> Key: HDFS-13573
> URL: https://issues.apache.org/jira/browse/HDFS-13573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: HDFS-13573.01.patch
>
>
> Current rule of default block placement policy:
> {quote}The replica placement strategy is that if the writer is on a datanode,
>  the 1st replica is placed on the local machine,
>  otherwise a random datanode. The 2nd replica is placed on a datanode
>  that is on a different rack. The 3rd replica is placed on a datanode
>  which is on a different node of the rack as the second replica.
> {quote}
> *if the writer is on a datanode, the 1st replica is placed on the local 
> machine*; actually, this can be decided by the HDFS client. The client can 
> pass {{CreateFlag#NO_LOCAL_WRITE}} to request that no block replica be put on 
> the local datanode. But subsequent replicas will still follow the default 
> block placement policy.






[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478801#comment-16478801
 ] 

Zsolt Venczel commented on HDFS-13573:
--

Hi [~linyiqun]!
 Thanks for the suggestions!

One question about the sentence you mentioned:
{code:java}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine otherwise a random datanode
* (By passing the {@link org.apache.hadoop.fs.CreateFlag}#NO_LOCAL_WRITE flag
* the client can request not to put a block replica on the local datanode.
{code}
If I understand you correctly, you're suggesting the above be changed to the 
following:
{code:java}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine by default.
* (By passing the {@link org.apache.hadoop.fs.CreateFlag}#NO_LOCAL_WRITE flag
* the client can request not to put a block replica on the local datanode.
{code}
I would suggest not leaving out the scenario where the writer is not on a 
datanode, and having the following:
{code:java}
* The replica placement strategy is that if the writer is on a datanode,
* the 1st replica is placed on the local machine by default
* (By passing the {@link org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE} flag
* the client can request not to put a block replica on the local datanode.
* Subsequent replicas will still follow default block placement policy.).
* If the writer is not on a datanode, the 1st replica is placed on a random 
node.{code}
What do you think?

Best regards,
Zsolt

> Javadoc for BlockPlacementPolicyDefault is inaccurate
> -
>
> Key: HDFS-13573
> URL: https://issues.apache.org/jira/browse/HDFS-13573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: HDFS-13573.01.patch
>
>
> Current rule of default block placement policy:
> {quote}The replica placement strategy is that if the writer is on a datanode,
>  the 1st replica is placed on the local machine,
>  otherwise a random datanode. The 2nd replica is placed on a datanode
>  that is on a different rack. The 3rd replica is placed on a datanode
>  which is on a different node of the rack as the second replica.
> {quote}
> *if the writer is on a datanode, the 1st replica is placed on the local 
> machine*; actually, this can be decided by the HDFS client. The client can 
> pass {{CreateFlag#NO_LOCAL_WRITE}} to request that no block replica be put on 
> the local datanode. But subsequent replicas will still follow the default 
> block placement policy.






[jira] [Commented] (HDDS-73) Add acceptance tests for Ozone Shell

2018-05-17 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478800#comment-16478800
 ] 

Lokesh Jain commented on HDDS-73:
-

v3 patch handles whitespace issues.

> Add acceptance tests for Ozone Shell
> 
>
> Key: HDDS-73
> URL: https://issues.apache.org/jira/browse/HDDS-73
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-73.001.patch, HDDS-73.002.patch, HDDS-73.003.patch
>
>
> This Jira aims to add acceptance tests related to the http and o3 schemes and 
> various server port combinations in shell commands.






[jira] [Updated] (HDDS-73) Add acceptance tests for Ozone Shell

2018-05-17 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-73?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-73:

Attachment: HDDS-73.003.patch

> Add acceptance tests for Ozone Shell
> 
>
> Key: HDDS-73
> URL: https://issues.apache.org/jira/browse/HDDS-73
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-73.001.patch, HDDS-73.002.patch, HDDS-73.003.patch
>
>
> This Jira aims to add acceptance tests related to the http and o3 schemes and 
> various server port combinations in shell commands.






[jira] [Created] (HDFS-13580) FailOnTimeout error in TestDataNodeVolumeFailure$testVolumeFailure

2018-05-17 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-13580:
-

 Summary: FailOnTimeout error in 
TestDataNodeVolumeFailure$testVolumeFailure
 Key: HDFS-13580
 URL: https://issues.apache.org/jira/browse/HDFS-13580
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ewan Higgs


testVolumeFailure is flaky. If we run it 50 times, it will fail about twice 
with the following backtrace:

 
{code:java}
java.lang.Exception: test timed out after 12 milliseconds

    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1253)
    at 
org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
    at 
org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){code}
The second error (immediately after) is probably due to an issue with cleaning 
up a timed-out test:
{code:java}
java.io.IOException: Cannot remove data directory: 
/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/datapath
 
'/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data':
 
   
absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data
   permissions: drwx
path 
'/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs':
 
   
absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
   permissions: drwx
path 
'/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
   
absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project
   permissions: drwx
path '/Users/ehiggs/src/hadoop': 
   absolute:/Users/ehiggs/src/hadoop
   permissions: drwx
path '/Users/ehiggs/src': 
   absolute:/Users/ehiggs/src
   permissions: drwx
path '/Users/ehiggs': 
   absolute:/Users/ehiggs
   permissions: drwx
path '/Users': 
   absolute:/Users
   permissions: dr-x
path '/': 
   absolute:/
   permissions: dr-x


   at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:896)
   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:517)
   at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:476)
   at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.setUp(TestDataNodeVolumeFailure.java:125)
   at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
   at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
 






[jira] [Commented] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478780#comment-16478780
 ] 

Yiqun Lin commented on HDFS-13480:
--

The latest patch looks good to me; only some checkstyle issues need to be fixed.
 Since the community is currently discussing the RBF phase 2 work, let's hold 
off committing this Jira until we reach agreement.

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch, HDFS-13480.003.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any 
> namenode, I get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates, 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}
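
A hedged sketch of the separation this sub-task proposes (the new config key 
name is hypothetical; the point is the split into two independent switches):
{code:java}
if (conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE,           // hypothetical new key
    RBFConfigKeys.DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE_DEFAULT)) {
  // NameNode monitoring is opt-in on its own, so an empty monitor list no
  // longer has to be an error when only router heartbeats are wanted.
  this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
  for (NamenodeHeartbeatService heartbeatService : this.namenodeHeartbeatServices) {
    addService(heartbeatService);
  }
}
if (conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
  // Router state updates (and thus mount table refreshes) stay available
  // independently of NameNode monitoring.
  this.routerHeartbeatService = new RouterHeartbeatService(this);
  addService(this.routerHeartbeatService);
}
{code}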






[jira] [Commented] (HDFS-13579) Out of memory when running TestDFSStripedOutputStreamWithFailure testCloseWithExceptionsInStreamer

2018-05-17 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478753#comment-16478753
 ] 

Ewan Higgs commented on HDFS-13579:
---

ad1b988a828608b12cafb6382436cd17f95bfcc5 (HDFS-11600) might be a more likely 
candidate.

> Out of memory when running TestDFSStripedOutputStreamWithFailure 
> testCloseWithExceptionsInStreamer
> --
>
> Key: HDFS-13579
> URL: https://issues.apache.org/jira/browse/HDFS-13579
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ewan Higgs
>Priority: Major
>
> When running TestDFSStripedOutputStreamWithFailure 
> testCloseWithExceptionsInStreamer, we often get OOM errors. It's not every 
> time, but it occurs frequently. We have reproduced this on a few different 
> machines. This seems to have been introduced in 
> f83716b7f2e5b63e4c2302c374982755233d4dd6 by HDFS-13251.
> Output from the test:
> {code:java}
> java.lang.OutOfMemoryError: unable to create new native thread
>     at java.lang.Thread.start0(Native Method)
>     at java.lang.Thread.start(Thread.java:714)
>     at 
> io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:578)
>     at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:146)
>     at 
> io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:69)
>     at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:270)
>     at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2023)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:2023)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:2013)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1992)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1966)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1959)
>     at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureBase.tearDown(TestDFSStripedOutputStreamWithFailureBase.java:222)
>     at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testCloseWithExceptionsInStreamer(TestDFSStripedOutputStreamWithFailure.java:266)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>     at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>     at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:54)
>     at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>     at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13579) Out of memory when running TestDFSStripedOutputStreamWithFailure testCloseWithExceptionsInStreamer

2018-05-17 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-13579:
-

 Summary: Out of memory when running 
TestDFSStripedOutputStreamWithFailure testCloseWithExceptionsInStreamer
 Key: HDFS-13579
 URL: https://issues.apache.org/jira/browse/HDFS-13579
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ewan Higgs


When running TestDFSStripedOutputStreamWithFailure 
testCloseWithExceptionsInStreamer, we often get OOM errors. It's not every time, 
but it occurs frequently. We have reproduced this on a few different machines. 
This seems to have been introduced in f83716b7f2e5b63e4c2302c374982755233d4dd6 
by HDFS-13251.

Output from the test:
{code:java}
java.lang.OutOfMemoryError: unable to create new native thread

    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at 
io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:578)
    at 
io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:146)
    at 
io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:69)
    at 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:270)
    at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2023)
    at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:2023)
    at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:2013)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1992)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1966)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1959)
    at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureBase.tearDown(TestDFSStripedOutputStreamWithFailureBase.java:222)
    at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testCloseWithExceptionsInStreamer(TestDFSStripedOutputStreamWithFailure.java:266)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
    at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
    at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:54)
    at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
    at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70){code}
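
For what it's worth, "unable to create new native thread" means the process hit 
its native thread limit rather than running out of heap. A minimal standalone 
illustration of this class of failure (not code from the test; run it only in a 
throwaway shell, ideally with a low ulimit -u):
{code:java}
public class NativeThreadExhaustion {
  public static void main(String[] args) {
    int started = 0;
    try {
      // Each parked thread still pins a native thread slot, which is the
      // resource the failing test run exhausts.
      while (true) {
        Thread t = new Thread(() -> {
          try {
            Thread.sleep(Long.MAX_VALUE);
          } catch (InterruptedException ignored) {
          }
        });
        t.start();
        started++;
      }
    } catch (OutOfMemoryError e) {
      System.out.println("Failed after " + started + " threads: " + e.getMessage());
    }
  }
}
{code}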



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478725#comment-16478725
 ] 

genericqa commented on HDDS-82:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdds/container-service generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 14s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Null passed for non-null parameter of 
java.util.concurrent.ConcurrentSkipListMap.put(Object, Object) in 
org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(String)
  At ContainerManagerImpl.java:of 
java.util.concurrent.ConcurrentSkipListMap.put(Object, Object) in 
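
The first FindBugs item flags a genuine runtime hazard: ConcurrentSkipListMap 
rejects null keys and values. A minimal standalone demonstration (the map 
contents here are illustrative, not taken from the patch):
{code:java}
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListNullValueDemo {
  public static void main(String[] args) {
    ConcurrentSkipListMap<String, String> containers = new ConcurrentSkipListMap<>();
    containers.put("container-1", "state");
    try {
      // ConcurrentSkipListMap.put throws NullPointerException on a null
      // value, so passing a possibly-null object into it is unsafe.
      containers.put("container-2", null);
    } catch (NullPointerException expected) {
      System.out.println("Rejected null value: " + expected);
    }
  }
}
{code}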

[jira] [Comment Edited] (HDFS-13245) RBF: State store DBMS implementation

2018-05-17 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478719#comment-16478719
 ] 

Yiran Wu edited comment on HDFS-13245 at 5/17/18 8:20 AM:
--

Hi [~giovanni.fumarola], thanks for the code review.
 The unit tests use h2database to test MySQL and MSSQLServer; it can emulate 
both in in-memory mode.

It is mainly used in the following two unit tests, which use special connection 
strings to simulate the different databases.
 *TestStateStoreMSSQLServer.java*
{code:java}
// TestStateStoreMSSQLServer.class  
 String driver = "org.h2.jdbcx.JdbcDataSource";
 String databaseUrl
 = 
"jdbc:h2:mem:db1;MODE=MSSQLServer;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
{code}
*TestStateStoreMySQL.java*
{code:java}
//TestStateStoreMySQL.class 
 String driver = "org.h2.jdbcx.JdbcDataSource";
 String databaseUrl =
"jdbc:h2:mem:db1;MODE=MySQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
{code}


was (Author: yiran):
Hi [~giovanni.fumarola], thanks for the code review.
 The unit tests use h2database to test MySQL and MSSQLServer; it can emulate 
both in in-memory mode.

It is mainly used in the following two unit tests, which use special connection 
strings to simulate the different databases.
 *TestStateStoreMSSQLServer.java*
{code:java}
// TestStateStoreMSSQLServer.class  
 String driver = "org.h2.jdbcx.JdbcDataSource";
 String databaseUrl
 = "jdbc:h2:mem:db1;MODE=MSSQLServer;"
+ "DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
{code}
*TestStateStoreMySQL.java*
{code:java}
//TestStateStoreMySQL.class 
 String driver = "org.h2.jdbcx.JdbcDataSource";
 String databaseUrl =
"jdbc:h2:mem:db1;MODE=MySQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
{code}

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-17 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Patch Available  (was: Open)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation

2018-05-17 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478719#comment-16478719
 ] 

Yiran Wu commented on HDFS-13245:
-

Hi [~giovanni.fumarola], thanks for the code review.
 The unit tests use h2database to test MySQL and MSSQLServer; it can emulate 
both in in-memory mode.

It is mainly used in the following two unit tests, which use special connection 
strings to simulate the different databases.
 *TestStateStoreMSSQLServer.java*
{code:java}
// TestStateStoreMSSQLServer.class  
 String driver = "org.h2.jdbcx.JdbcDataSource";
 String databaseUrl
 = "jdbc:h2:mem:db1;MODE=MSSQLServer;"
+ "DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
{code}
*TestStateStoreMySQL.java*
{code:java}
//TestStateStoreMySQL.class 
 String driver = "org.h2.jdbcx.JdbcDataSource";
 String databaseUrl =
"jdbc:h2:mem:db1;MODE=MySQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
{code}
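
For readers unfamiliar with this trick, a minimal self-contained sketch of 
exercising an in-memory H2 database in MySQL compatibility mode (requires the 
h2 jar on the classpath; the table and values are made up for illustration):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2MySqlModeDemo {
  public static void main(String[] args) throws Exception {
    // MODE=MySQL makes H2 accept MySQL syntax; DB_CLOSE_DELAY=-1 keeps the
    // in-memory database alive until the JVM exits.
    String url =
        "jdbc:h2:mem:db1;MODE=MySQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement()) {
      stmt.execute(
          "CREATE TABLE mount_entries (id INT PRIMARY KEY, path VARCHAR(255))");
      stmt.execute("INSERT INTO mount_entries VALUES (1, '/data')");
      try (ResultSet rs = stmt.executeQuery("SELECT path FROM mount_entries")) {
        while (rs.next()) {
          System.out.println(rs.getString("path"));
        }
      }
    }
  }
}
{code}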

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-05-17 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HDFS-13245:

Status: Open  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478680#comment-16478680
 ] 

genericqa commented on HDFS-13480:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923861/HDFS-13480.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux ec22a77c51ea 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 454de3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24239/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24239/testReport/ |
| Max. process+thread count | 961 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-17 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478650#comment-16478650
 ] 

Mukul Kumar Singh commented on HDDS-76:
---

Thanks for the patch [~shashikant]. The patch looks really good to me. Please 
find my comments below.

1) ContainerLocationManager.java:131, I think we can remove the TODO now :)
2) ScmContainerDatanodeProtocol.proto:152, we should add a field to signify 
that the storage report has failed.
3) StorageLocationReport.java:22, unused import.
4) I was also wondering whether toProtobuf and getFromProtobuf methods should 
be added in StorageLocationReport.java; a rough sketch of the pattern follows.
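
To make item 4 concrete, here is a self-contained sketch of the round-trip 
pattern I have in mind. Plain nested classes stand in for the protobuf-generated 
message; all names below are placeholders rather than the actual HDDS types:
{code:java}
public class StorageReportConversionSketch {

  // Stand-in for the protobuf-generated SCMStorageReportProto message.
  static final class StorageReportMsg {
    final String storageUuid;
    final String storageLocation;
    StorageReportMsg(String storageUuid, String storageLocation) {
      this.storageUuid = storageUuid;
      this.storageLocation = storageLocation;
    }
  }

  // Stand-in for StorageLocationReport with the proposed conversion methods.
  static final class StorageLocationReport {
    final String storageUuid;
    final String storageLocation;
    StorageLocationReport(String storageUuid, String storageLocation) {
      this.storageUuid = storageUuid;
      this.storageLocation = storageLocation;
    }
    StorageReportMsg toProtobuf() {
      return new StorageReportMsg(storageUuid, storageLocation);
    }
    static StorageLocationReport getFromProtobuf(StorageReportMsg msg) {
      return new StorageLocationReport(msg.storageUuid, msg.storageLocation);
    }
  }

  public static void main(String[] args) {
    StorageLocationReport report =
        new StorageLocationReport("uuid-1", "/data/disk1");
    StorageLocationReport roundTrip =
        StorageLocationReport.getFromProtobuf(report.toProtobuf());
    System.out.println(roundTrip.storageUuid + " -> " + roundTrip.storageLocation);
  }
}
{code}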

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-76.00.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-17 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478643#comment-16478643
 ] 

Mukul Kumar Singh commented on HDDS-71:
---

Thanks for working on this [~bharatviswa]. The patch looks good to me. I was 
wondering whether we should also declare the DBType as an enum in the proto 
file?

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-71.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13573) Javadoc for BlockPlacementPolicyDefault is inaccurate

2018-05-17 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478616#comment-16478616
 ] 

Yiqun Lin commented on HDFS-13573:
--

Thanks [~zvenczel] for working on this and providing the patch.
{quote}the 1st replica is placed on the local machine otherwise a random 
datanode
{quote}
Not being placed on the local node does not mean it must be on a random node, 
so this is not entirely correct. I made a minor change based on yours; you can 
update it like this:
{noformat}
 * the 1st replica is placed on the local machine by default.
 * (By passing the {@link org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE} flag
 * the client can request not to put a block replica on the local datanode.
 * Subsequent replicas will still follow default block placement policy.).
{noformat}
Also correct \{@link org.apache.hadoop.fs.CreateFlag}#NO_LOCAL_WRITE to

{@link org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE}
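
As a side note, a minimal sketch of how a client requests this behavior, 
assuming a reachable HDFS and its configuration on the classpath (the path and 
replication values are illustrative):
{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NoLocalWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path path = new Path("/tmp/no-local-write-demo");
      // CREATE plus NO_LOCAL_WRITE asks that the first replica not be
      // placed on the writer's local datanode; subsequent replicas still
      // follow the default block placement policy.
      EnumSet<CreateFlag> flags =
          EnumSet.of(CreateFlag.CREATE, CreateFlag.NO_LOCAL_WRITE);
      try (FSDataOutputStream out = fs.create(path,
          FsPermission.getFileDefault(), flags, 4096, (short) 3,
          fs.getDefaultBlockSize(path), null)) {
        out.writeBytes("hello");
      }
    }
  }
}
{code}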

> Javadoc for BlockPlacementPolicyDefault is inaccurate
> -
>
> Key: HDFS-13573
> URL: https://issues.apache.org/jira/browse/HDFS-13573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: HDFS-13573.01.patch
>
>
> Current rule of default block placement policy:
> {quote}The replica placement strategy is that if the writer is on a datanode,
>  the 1st replica is placed on the local machine,
>  otherwise a random datanode. The 2nd replica is placed on a datanode
>  that is on a different rack. The 3rd replica is placed on a datanode
>  which is on a different node of the rack as the second replica.
> {quote}
> *if the writer is on a datanode, the 1st replica is placed on the local 
> machine*: actually, this can be decided by the HDFS client. The client can 
> pass {{CreateFlag#NO_LOCAL_WRITE}} to request that no block replica be put on 
> the local datanode, but subsequent replicas will still follow the default 
> block placement policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-17 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478607#comment-16478607
 ] 

maobaolong commented on HDFS-13480:
---

[~linyiqun] I've fixed the code style, added the javadoc for 
assertRouterHeartbeater, and improved the documentation. PTAL. Thank you for 
your comments.
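
For reviewers skimming the thread, a minimal sketch of the kind of separation 
the patch is after; the key names below are placeholders, not necessarily the 
ones the patch defines:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class HeartbeatConfigSplitSketch {
  // Placeholder keys; the real ones would live in RBFConfigKeys.
  static final String NAMENODE_HEARTBEAT_ENABLE =
      "dfs.federation.router.namenode.heartbeat.enable";
  static final String ROUTER_HEARTBEAT_ENABLE =
      "dfs.federation.router.heartbeat.enable";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean(NAMENODE_HEARTBEAT_ENABLE, false);
    conf.setBoolean(ROUTER_HEARTBEAT_ENABLE, true);
    // With two independent keys, turning off namenode monitoring no longer
    // turns off the periodic router state (and mount table) updates.
    if (conf.getBoolean(NAMENODE_HEARTBEAT_ENABLE, true)) {
      System.out.println("would create NamenodeHeartbeatServices");
    }
    if (conf.getBoolean(ROUTER_HEARTBEAT_ENABLE, true)) {
      System.out.println("would start RouterHeartbeatService");
    }
  }
}
{code}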

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch, HDFS-13480.003.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any namenode, I 
> get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-17 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13480:
--
Attachment: HDFS-13480.003.patch

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch, HDFS-13480.003.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any namenode, I 
> get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478595#comment-16478595
 ] 

genericqa commented on HDFS-13560:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 34m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 34m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} root: The patch generated 0 new + 120 unchanged - 1 
fixed = 120 total (was 121) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}267m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.util.Shell.getMemlockLimit(Long)  At Shell.java:then 
immediately reboxed in org.apache.hadoop.util.Shell.getMemlockLimit(Long)  At 
Shell.java:[line 1408] |
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13560 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923829/HDFS-13560.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  
