[jira] [Commented] (HDDS-1597) Remove hdds-server-scm dependency from ozone-common

2019-06-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854280#comment-16854280
 ] 

Hudson commented on HDDS-1597:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16651 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16651/])
Revert "HDDS-1597. Remove hdds-server-scm dependency from ozone-common. (elek: 
rev 2a97a37d9e313e509ac43fdafd379183fd564d9a)
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-ozone/tools/pom.xml
* (edit) hadoop-ozone/common/pom.xml
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) hadoop-ozone/integration-test/pom.xml
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/package-info.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/exceptions/SCMException.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/ScmUtils.java


> Remove hdds-server-scm dependency from ozone-common
> ---
>
> Key: HDDS-1597
> URL: https://issues.apache.org/jira/browse/HDDS-1597
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: ozone-dependency.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> I noticed that the hadoop-ozone/common project depends on the 
> hadoop-hdds-server-scm project.
> The common projects are designed to be shared artifacts between the client and 
> server side. Adding an additional dependency to the common pom means that the 
> dependency will be available to all the clients as well.
> (See the attached image for the current and desired structure.)
> We definitely don't need the SCM server dependency on the client side.
> The code dependency is just one class (ScmUtils), and the shared code can be 
> easily moved to the common module.
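One hypothetical way to verify the change (not part of the patch; it assumes a standard Maven 
checkout of trunk): once ScmUtils is moved, the server artifact should disappear from the 
client-facing module's dependency tree.

{code:bash}
# Hypothetical check, not from the issue: before the patch this prints the
# hdds-server-scm artifact for hadoop-ozone/common; after it, it should print nothing.
mvn dependency:tree -pl hadoop-ozone/common | grep hdds-server-scm
{code}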



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13739) Option to disable Rack Local Write Preference to avoid 2 issues - 1. Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major Storage Imbalance across

2019-06-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854270#comment-16854270
 ] 

Hadoop QA commented on HDFS-13739:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
56s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}212m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970629/HDFS-13739-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 470187174bfc 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2210897 |
| maven | ver

[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-06-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854269#comment-16854269
 ] 

Hadoop QA commented on HDFS-14508:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
38s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 11 new + 12 
unchanged - 0 fixed = 23 total (was 12) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  1s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14508 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970634/HDFS-14508-HDFS-13891.5.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 919e3e79c82e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 0bcbdd6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26886/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26886/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26886/testRepo

[jira] [Updated] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1629:
---
Status: Patch Available  (was: Open)

> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test Ozone, as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
> docker-compose pseudo clusters and smoketests.
> If it's needed, the tar file creation can still be requested with the dist profile.
>  
> On my machine (SSD-based) this gives a 5-10% build time improvement, as the tar 
> is ~500MB and requires a lot of IO.
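A minimal sketch of how such a switch could look in the dist module's pom.xml; the profile id, 
plugin binding and descriptor path below are assumptions for illustration, not the actual patch:

{code:xml}
<!-- Hypothetical sketch: run the tar assembly only when -Pdist is activated. -->
<profiles>
  <profile>
    <id>dist</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <executions>
            <execution>
              <id>ozone-dist-tar</id>
              <phase>package</phase>
              <goals>
                <goal>single</goal>
              </goals>
              <configuration>
                <!-- descriptor path is an assumption -->
                <descriptors>
                  <descriptor>src/main/assembly/ozone-dist.xml</descriptor>
                </descriptors>
                <formats>
                  <format>tar.gz</format>
                </formats>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
{code}

Without -Pdist only the exploded hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT directory would be 
produced; with -Pdist the tar is created as before.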



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1597) Remove hdds-server-scm dependency from ozone-common

2019-06-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1597?focusedWorklogId=252940&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-252940
 ]

ASF GitHub Bot logged work on HDDS-1597:


Author: ASF GitHub Bot
Created on: 03/Jun/19 06:36
Start Date: 03/Jun/19 06:36
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #860: HDDS-1597. Remove 
hdds-server-scm dependency from ozone-common
URL: https://github.com/apache/hadoop/pull/860#issuecomment-498131007
 
 
   Thanks @bharatviswa504 for the review and commit. I noticed a problem with 
ratis/THREE replication: I can't write any key, and no error is visible in the 
console log.
   
   I tried reverting multiple commits one by one, and this one is the most 
suspicious. I will revert it for now and retest it in a separate PR to be 
sure it's fine.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 252940)
Time Spent: 2h 50m  (was: 2h 40m)

> Remove hdds-server-scm dependency from ozone-common
> ---
>
> Key: HDDS-1597
> URL: https://issues.apache.org/jira/browse/HDDS-1597
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: ozone-dependency.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> I noticed that the hadoop-ozone/common project depends on the 
> hadoop-hdds-server-scm project.
> The common projects are designed to be shared artifacts between the client and 
> server side. Adding an additional dependency to the common pom means that the 
> dependency will be available to all the clients as well.
> (See the attached image for the current and desired structure.)
> We definitely don't need the SCM server dependency on the client side.
> The code dependency is just one class (ScmUtils), and the shared code can be 
> easily moved to the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1597) Remove hdds-server-scm dependency from ozone-common

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reopened HDDS-1597:


> Remove hdds-server-scm dependency from ozone-common
> ---
>
> Key: HDDS-1597
> URL: https://issues.apache.org/jira/browse/HDDS-1597
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: ozone-dependency.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> I noticed that the hadoop-ozone/common project depends on the 
> hadoop-hdds-server-scm project.
> The common projects are designed to be shared artifacts between the client and 
> server side. Adding an additional dependency to the common pom means that the 
> dependency will be available to all the clients as well.
> (See the attached image for the current and desired structure.)
> We definitely don't need the SCM server dependency on the client side.
> The code dependency is just one class (ScmUtils), and the shared code can be 
> easily moved to the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?focusedWorklogId=252936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-252936
 ]

ASF GitHub Bot logged work on HDDS-1629:


Author: ASF GitHub Bot
Created on: 03/Jun/19 06:33
Start Date: 03/Jun/19 06:33
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #887: HDDS-1629. Tar 
file creation can be optional for non-dist builds
URL: https://github.com/apache/hadoop/pull/887
 
 
   Ozone tar file creation is a very time-consuming step. I propose to make it 
optional and create the tar file only if the dist profile is enabled (-Pdist).
   
   The tar file is not required to test Ozone, as the same content is available 
from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
docker-compose pseudo clusters and smoketests.
   
   If it's needed, the tar file creation can still be requested with the dist profile.

   On my machine (SSD-based) this gives a 5-10% build time improvement, as the tar 
is ~500MB and requires a lot of IO.
   
   See: https://issues.apache.org/jira/browse/HDDS-1629
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 252936)
Time Spent: 10m
Remaining Estimate: 0h

> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test Ozone, as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
> docker-compose pseudo clusters and smoketests.
> If it's needed, the tar file creation can still be requested with the dist profile.
>  
> On my machine (SSD-based) this gives a 5-10% build time improvement, as the tar 
> is ~500MB and requires a lot of IO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1629:
---
Description: 
Ozone tar file creation is a very time-consuming step. I propose to make it 
optional and create the tar file only if the dist profile is enabled (-Pdist).

The tar file is not required to test Ozone, as the same content is available 
from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
docker-compose pseudo clusters and smoketests.

If it's needed, the tar file creation can still be requested with the dist profile.
 
On my machine (SSD-based) this gives a 5-10% build time improvement, as the tar 
is ~500MB and requires a lot of IO.

  was:
Ozone tar file creation is a very time-consuming step. I propose to make it 
optional and create the tar file only if the dist profile is enabled (-Pdist).

The tar file is not required to test Ozone, as the same content is available 
from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
docker-compose pseudo clusters and smoketests.

If it's needed, the tar file creation can still be requested with the dist profile.
 


> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test Ozone, as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
> docker-compose pseudo clusters and smoketests.
> If it's needed, the tar file creation can still be requested with the dist profile.
>  
> On my machine (SSD-based) this gives a 5-10% build time improvement, as the tar 
> is ~500MB and requires a lot of IO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1629:
-
Labels: pull-request-available  (was: )

> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test Ozone, as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
> docker-compose pseudo clusters and smoketests.
> If it's needed, the tar file creation can still be requested with the dist profile.
>  
> On my machine (SSD-based) this gives a 5-10% build time improvement, as the tar 
> is ~500MB and requires a lot of IO.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1629) Tar file creation can be optional for non-dist builds

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1629:
---
Summary: Tar file creation can be optional for non-dist builds  (was: Tar 
file creation can be option for non-dist builds)

> Tar file creation can be optional for non-dist builds
> -
>
> Key: HDDS-1629
> URL: https://issues.apache.org/jira/browse/HDDS-1629
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone tar file creation is a very time-consuming step. I propose to make it 
> optional and create the tar file only if the dist profile is enabled (-Pdist).
> The tar file is not required to test Ozone, as the same content is available 
> from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
> docker-compose pseudo clusters and smoketests.
> If it's needed, the tar file creation can still be requested with the dist profile.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1629) Tar file creation can be option for non-dist builds

2019-06-02 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1629:
--

 Summary: Tar file creation can be option for non-dist builds
 Key: HDDS-1629
 URL: https://issues.apache.org/jira/browse/HDDS-1629
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


Ozone tar file creation is a very time-consuming step. I propose to make it 
optional and create the tar file only if the dist profile is enabled (-Pdist).

The tar file is not required to test Ozone, as the same content is available 
from hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT, which is enough to run 
docker-compose pseudo clusters and smoketests.

If it's needed, the tar file creation can still be requested with the dist profile.
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-06-02 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-14525.
--
Resolution: Not A Problem

> JspHelper ignores hadoop.http.authentication.type
> -
>
> Key: HDFS-14525
> URL: https://issues.apache.org/jira/browse/HDFS-14525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> On a secure cluster with hadoop.http.authentication.type set to simple and 
> hadoop.http.authentication.anonymous.allowed set to true, the WebHDFS REST API 
> fails when user.name is not set. It runs fine if user.name=ambari-qa is set.
> {code}
> [knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H 
> 'Content-Length: 0' --negotiate -u : 
> 'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS'
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Security enabled but 
> user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ 
> {code}
> JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of 
> conf.get(hadoop.http.authentication.type).equals("kerberos") to determine whether 
> HTTP is secure, which causes the issue.
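A rough Java sketch of the check the reporter is suggesting (illustrative only, not the actual 
JspHelper code; the issue was resolved above as Not A Problem):

{code:java}
// Hypothetical sketch: decide the "authenticated user required" path from the
// HTTP authentication type instead of UserGroupInformation.isSecurityEnabled() alone.
boolean httpUsesKerberos = "kerberos".equalsIgnoreCase(
    conf.get("hadoop.http.authentication.type", "simple"));

if (UserGroupInformation.isSecurityEnabled() && httpUsesKerberos) {
  // require a user authenticated by the HTTP auth filter
} else {
  // fall back to the user.name query parameter / anonymous access handling
}
{code}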



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1628:
---
Status: Patch Available  (was: Open)

> Fix the execution and return code of smoketest executor shell script
> ---
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh), which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh), didn't 
> handle the return code correctly (the failure of the smoketests should be 
> signalled by the bash return code).
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (it should work from any 
> other directory)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-1628:
--

Assignee: Elek, Marton

> Fix the execution and return code of smoketest executor shell script
> ---
>
> Key: HDDS-1628
> URL: https://issues.apache.org/jira/browse/HDDS-1628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> Problem: Some of the smoketest executions were reported as green even if they 
> contained failed tests.
> Root cause: the legacy test executor 
> (hadoop-ozone/dist/src/main/smoketest/test.sh), which just calls the new 
> executor script (hadoop-ozone/dist/src/main/compose/test-all.sh), didn't 
> handle the return code correctly (the failure of the smoketests should be 
> signalled by the bash return code).
> This patch:
>  * Fixes the error code handling in smoketest/test.sh
>  * Fixes the test execution in compose/test-all.sh (it should work from any 
> other directory)
>  * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
> test-all.sh executor instead of the old one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1628) Fix the execution and return code of smoketest executor shell script

2019-06-02 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1628:
--

 Summary: Fix the execution and return code of smoketest executor 
shell script
 Key: HDDS-1628
 URL: https://issues.apache.org/jira/browse/HDDS-1628
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


Problem: Some of the smoketest executions were reported as green even if they 
contained failed tests.

Root cause: the legacy test executor 
(hadoop-ozone/dist/src/main/smoketest/test.sh), which just calls the new 
executor script (hadoop-ozone/dist/src/main/compose/test-all.sh), didn't handle 
the return code correctly (the failure of the smoketests should be signalled by 
the bash return code).

This patch:
 * Fixes the error code handling in smoketest/test.sh
 * Fixes the test execution in compose/test-all.sh (it should work from any other 
directory)
 * Updates hadoop-ozone/dev-support/checks/acceptance.sh to use the newer 
test-all.sh executor instead of the old one.
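A minimal sketch of the return-code propagation described above (illustrative only; the real 
test.sh and test-all.sh differ):

{code:bash}
#!/usr/bin/env bash
# Hypothetical legacy-wrapper sketch: resolve the script's own directory so it
# works from any directory, delegate to the new executor, and propagate its
# exit code so failed smoketests fail the calling job as well.
set -u
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

"$SCRIPT_DIR/../compose/test-all.sh" "$@"
rc=$?

exit "$rc"
{code}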



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1627) Make the version of the used hadoop-runner configurable

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1627:
---
Status: Patch Available  (was: Open)

> Make the version of the used hadoop-runner configurable
> ---
>
> Key: HDDS-1627
> URL: https://issues.apache.org/jira/browse/HDDS-1627
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During an offline discussion with [~arp] and [~eyang] we agreed that it would 
> be safer to pin the tag of the hadoop-runner images used during the 
> releases.
> It also requires fixed tags on the hadoop-runner side, but after that it's 
> possible to use the fixed tags.
> This patch makes it possible to define the required version/tag in pom.xml:
>  1. The default hadoop-runner.version is added to all .env files during the 
> build.
>  2. If a variable is added to the .env, it can be used from docker-compose 
> files AND can be overridden by environment variables (this makes it possible to 
> define a custom version during a local run).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1627) Make the version of the used hadoop-runner configurable

2019-06-02 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-1627:
--

Assignee: Elek, Marton

> Make the version of the used hadoop-runner configurable
> ---
>
> Key: HDDS-1627
> URL: https://issues.apache.org/jira/browse/HDDS-1627
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During an offline discussion with [~arp] and [~eyang] we agreed that it would 
> be safer to pin the tag of the hadoop-runner images used during the 
> releases.
> It also requires fixed tags on the hadoop-runner side, but after that it's 
> possible to use the fixed tags.
> This patch makes it possible to define the required version/tag in pom.xml:
>  1. The default hadoop-runner.version is added to all .env files during the 
> build.
>  2. If a variable is added to the .env, it can be used from docker-compose 
> files AND can be overridden by environment variables (this makes it possible to 
> define a custom version during a local run).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1627) Make the version of the used hadoop-runner configurable

2019-06-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1627?focusedWorklogId=252919&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-252919
 ]

ASF GitHub Bot logged work on HDDS-1627:


Author: ASF GitHub Bot
Created on: 03/Jun/19 05:56
Start Date: 03/Jun/19 05:56
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #886: HDDS-1627. Make 
the version of the used hadoop-runner configurable
URL: https://github.com/apache/hadoop/pull/886
 
 
   During an offline discussion with [~arp] and [~eyang] we agreed that it 
would be safer to pin the tag of the hadoop-runner images used during the 
releases.
   
   It also requires fixed tags on the hadoop-runner side, but after that it's 
possible to use the fixed tags.
   
   This patch makes it possible to define the required version/tag in pom.xml:
   
1. The default hadoop-runner.version is added to all .env files during the 
build.
2. If a variable is added to the .env, it can be used from docker-compose 
files AND can be overridden by environment variables (this makes it possible to 
define a custom version during a local run).
   
   See: https://issues.apache.org/jira/browse/HDDS-1627
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 252919)
Time Spent: 10m
Remaining Estimate: 0h

> Make the version of the used hadoop-runner configurable
> ---
>
> Key: HDDS-1627
> URL: https://issues.apache.org/jira/browse/HDDS-1627
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During an offline discussion with [~arp] and [~eyang] we agreed that it would 
> be safer to pin the tag of the hadoop-runner images used during the 
> releases.
> It also requires fixed tags on the hadoop-runner side, but after that it's 
> possible to use the fixed tags.
> This patch makes it possible to define the required version/tag in pom.xml:
>  1. The default hadoop-runner.version is added to all .env files during the 
> build.
>  2. If a variable is added to the .env, it can be used from docker-compose 
> files AND can be overridden by environment variables (this makes it possible to 
> define a custom version during a local run).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1627) Make the version of the used hadoop-runner configurable

2019-06-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1627:
-
Labels: pull-request-available  (was: )

> Make the version of the used hadoop-runner configurable
> ---
>
> Key: HDDS-1627
> URL: https://issues.apache.org/jira/browse/HDDS-1627
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> During an offline discussion with [~arp] and [~eyang] we agreed that it would 
> be safer to pin the tag of the hadoop-runner images used during the 
> releases.
> It also requires fixed tags on the hadoop-runner side, but after that it's 
> possible to use the fixed tags.
> This patch makes it possible to define the required version/tag in pom.xml:
>  1. The default hadoop-runner.version is added to all .env files during the 
> build.
>  2. If a variable is added to the .env, it can be used from docker-compose 
> files AND can be overridden by environment variables (this makes it possible to 
> define a custom version during a local run).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1627) Make the version of the used hadoop-runner configurable

2019-06-02 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1627:
--

 Summary: Make the version of the used hadoop-runner configurable
 Key: HDDS-1627
 URL: https://issues.apache.org/jira/browse/HDDS-1627
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


During an offline discussion with [~arp] and [~eyang] we agreed that it would 
be safer to pin the tag of the hadoop-runner images used during the releases.

It also requires fixed tags on the hadoop-runner side, but after that it's 
possible to use the fixed tags.

This patch makes it possible to define the required version/tag in pom.xml:

 1. The default hadoop-runner.version is added to all .env files during the 
build.
 2. If a variable is added to the .env, it can be used from docker-compose 
files AND can be overridden by environment variables (this makes it possible to 
define a custom version during a local run; see the sketch below).
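A minimal sketch of the mechanism (the variable name and tag values below are assumptions for 
illustration, not the actual patch):

{code:bash}
# .env, generated during the Maven build (hypothetical variable name and value):
HADOOP_RUNNER_VERSION=latest

# docker-compose.yaml can then reference it:
#   image: apache/hadoop-runner:${HADOOP_RUNNER_VERSION}

# For a local run, a shell environment variable overrides the value from .env:
HADOOP_RUNNER_VERSION=some-fixed-tag docker-compose up -d
{code}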



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-06-02 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854229#comment-16854229
 ] 

Takanobu Asanuma commented on HDFS-14508:
-

Thanks for the review. Uploaded the 5th patch.

* rename {{RouterCoreMBean}} to {{RouterMBean}}

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch, HDFS-14508-HDFS-13891.3.patch, 
> HDFS-14508-HDFS-13891.4.patch, HDFS-14508-HDFS-13891.5.patch
>
>
> Router UI has tags that are not used or incorrectly set. The code should be 
> cleaned up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-06-02 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14508:

Attachment: HDFS-14508-HDFS-13891.5.patch

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch, HDFS-14508-HDFS-13891.3.patch, 
> HDFS-14508-HDFS-13891.4.patch, HDFS-14508-HDFS-13891.5.patch
>
>
> Router UI has tags that are not used or incorrectly set. The code should be 
> cleaned up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-06-02 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?focusedWorklogId=252912&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-252912
 ]

ASF GitHub Bot logged work on HDDS-1600:


Author: ASF GitHub Bot
Created on: 03/Jun/19 05:21
Start Date: 03/Jun/19 05:21
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #857: HDDS-1600. Add 
userName and IPAddress as part of OMRequest.
URL: https://github.com/apache/hadoop/pull/857#issuecomment-498116567
 
 
   @bharatviswa504 thanks for the patch. On second thought, I wonder why we 
don't complete authorization on the OM which receives the first request from 
the client; this would save us the trouble of propagating credentials in the 
rest of the calls and simplify the HA design.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 252912)
Time Spent: 2.5h  (was: 2h 20m)

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the GRPC context, so 
> the UGI object which we retrieve from ProtobufRpcEngine.Server.getRemoteUser() 
> will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the Ratis server.
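A rough Java sketch of this flow (a method-level fragment; the UserInfo sub-message and setter 
names are assumptions for illustration, not the actual OMRequest schema):

{code:java}
// Hypothetical sketch: capture the caller identity while still in the RPC
// handler context and attach it to the request before it is submitted to Ratis.
// Assumes org.apache.hadoop.ipc.ProtobufRpcEngine, java.net.InetAddress and
// org.apache.hadoop.security.UserGroupInformation; OMRequest.UserInfo is a
// placeholder for whatever field the patch adds.
public OMRequest preExecute(OMRequest request) throws IOException {
  UserGroupInformation ugi = ProtobufRpcEngine.Server.getRemoteUser(); // only valid here
  InetAddress remoteIp = ProtobufRpcEngine.Server.getRemoteIp();       // only valid here

  OMRequest.UserInfo userInfo = OMRequest.UserInfo.newBuilder()        // assumed message
      .setUserName(ugi.getUserName())
      .setRemoteAddress(remoteIp.getHostAddress())
      .build();

  // The enriched request is what gets sent to the Ratis server.
  return request.toBuilder().setUserInfo(userInfo).build();
}
{code}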



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14530) libhdfspp is missing functions like hdfsWrite

2019-06-02 Thread Krishna Kishore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854198#comment-16854198
 ] 

Krishna Kishore commented on HDFS-14530:


Hi Wei-Chiu,

Thanks for the reply. I have downloaded Apache Hadoop 3.2.0 and couldn't 
find libhdfspp there. These are the only libraries there:

[kishore@kkpx11 hadoop-3.2.0]$ find . -name *.so*
./lib/native1/libhadoop.so
./lib/native1/libhadoop.so.1.0.0
./lib/native1/libnativetask.so
./lib/native1/libnativetask.so.1.0.0

This is why I wanted to build it from the source available at 
[https://github.com/apache/hadoop]. When I built it, I saw that hdfsWrite() and 
some other functions are not available in libhdfspp.so.

It would be good if the library were available in the 3.2.0 release bundle so 
that we don't have to build it ourselves. Or, if that is not possible yet, we 
at least need to resolve this gap in the write functionality.

Thanks,

Kishore

> libhdfspp is missing functions like hdfsWrite
> -
>
> Key: HDFS-14530
> URL: https://issues.apache.org/jira/browse/HDFS-14530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs++
>Affects Versions: 3.1.0, 2.8.5
>Reporter: Krishna Kishore
>Priority: Major
> Fix For: 3.1.0, 2.8.5
>
>
> I have downloaded the code from [https://github.com/apache/hadoop] and compiled 
> libhdfspp. I couldn't find how to use this library, libhdfspp.so. I see that 
> some functions like hdfsWrite() and hdfsHSync() are missing from this library.
> This library is unusable without the write functionality. Please let me know 
> if there is a way to use the write functionality with the current hdfs++ code or 
> if there are plans for adding this functionality.
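For reference, this is the write path the reporter is looking for as exposed by the classic 
libhdfs C API (hdfs.h); whether libhdfspp exposes the same calls is exactly the gap being 
reported. A minimal sketch, with error handling omitted:

{code:c}
#include <fcntl.h>
#include <string.h>
#include "hdfs.h"   /* classic libhdfs C API */

int main(void) {
  hdfsFS fs = hdfsConnect("default", 0);   /* use fs.defaultFS from the config */
  hdfsFile out = hdfsOpenFile(fs, "/tmp/demo.txt", O_WRONLY | O_CREAT, 0, 0, 0);

  const char *msg = "hello from libhdfs\n";
  hdfsWrite(fs, out, msg, (tSize) strlen(msg));   /* reported missing in libhdfspp.so */
  hdfsHSync(fs, out);                             /* also reported missing */

  hdfsCloseFile(fs, out);
  hdfsDisconnect(fs);
  return 0;
}
{code}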



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1626) Optimize allocateBlock for cases when excludeList is provided

2019-06-02 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1626:
-

 Summary: Optimize allocateBlock for cases when excludeList is 
provided
 Key: HDDS-1626
 URL: https://issues.apache.org/jira/browse/HDDS-1626
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


This Jira aims to optimize allocateBlock for cases when an excludeList is 
provided. This covers both the case when the excludeList is empty and the case 
when it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13739) Option to disable Rack Local Write Preference to avoid 2 issues - 1. Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major Storage Imbalance across Da

2019-06-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13739:

Assignee: Ayush Saxena
  Status: Patch Available  (was: Open)

> Option to disable Rack Local Write Preference to avoid 2 issues - 1. 
> Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major 
> Storage Imbalance across DataNodes caused by uneven spread of Datanodes 
> across Racks
> ---
>
> Key: HDFS-13739
> URL: https://issues.apache.org/jira/browse/HDFS-13739
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, block placement, datanode, fs, 
> hdfs, hdfs-client, namenode, nn, performance
>Affects Versions: 2.7.3
> Environment: Hortonworks HDP 2.6
>Reporter: Hari Sekhon
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13739-01.patch
>
>
> Request to be able to disable Rack Local Write preference / Write All 
> Replicas to different Racks.
> Current HDFS write pattern of "local node, rack local node, other rack node" 
> is good for most purposes but there are at least 2 scenarios where this is 
> not ideal:
>  # Rack-by-Rack Maintenance leaves data at risk of losing last remaining 
> replica. If a single datanode failed it would likely cause some data outage 
> or even data loss if the rack is lost or an upgrade fails (or perhaps it's a 
> rack rebuild). Setting replicas to 4 would reduce write performance and waste 
> storage which is currently the only workaround to that issue.
>  # Major Storage Imbalance across datanodes when there is an uneven layout of 
> datanodes across racks - some nodes fill up while others are half empty.
> I have observed this storage imbalance on a cluster where half the nodes were 
> 85% full and the other half were only 50% full.
> Rack layouts like the following illustrate this - the nodes in the same rack 
> will only choose to send half their block replicas to each other, so they 
> will fill up first, while other nodes will receive far fewer replica blocks:
> {code:java}
> NumNodes - Rack 
> 2 - rack 1
> 2 - rack 2
> 1 - rack 3
> 1 - rack 4 
> 1 - rack 5
> 1 - rack 6{code}
> In this case if I reduce the number of replicas to 2 then I get an almost 
> perfect spread of blocks across all datanodes because HDFS has no choice but 
> to maintain the only 2nd replica on a different rack. If I increase the 
> replicas back to 3 it goes back to 85% on half the nodes and 50% on the other 
> half, because the extra replicas choose to replicate only to rack local nodes.
> Why not just run the HDFS balancer to fix it you might say? This is a heavily 
> loaded HBase cluster - aside from destroying HBase's data locality and 
> performance by moving blocks out from underneath RegionServers - as soon as 
> an HBase major compaction occurs (at least weekly), all blocks will get 
> re-written by HBase and the HDFS client will again write to local node, rack 
> local node, other rack node - resulting in the same storage imbalance again. 
> Hence this cannot be solved by running HDFS balancer on HBase clusters - or 
> for any application sitting on top of HDFS that has any HDFS block churn.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13739) Option to disable Rack Local Write Preference to avoid 2 issues - 1. Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major Storage Imbalance across

2019-06-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854179#comment-16854179
 ] 

Ayush Saxena commented on HDFS-13739:
-

Makes sense to have. I have uploaded a patch for the same.

> Option to disable Rack Local Write Preference to avoid 2 issues - 1. 
> Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major 
> Storage Imbalance across DataNodes caused by uneven spread of Datanodes 
> across Racks
> ---
>
> Key: HDFS-13739
> URL: https://issues.apache.org/jira/browse/HDFS-13739
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, block placement, datanode, fs, 
> hdfs, hdfs-client, namenode, nn, performance
>Affects Versions: 2.7.3
> Environment: Hortonworks HDP 2.6
>Reporter: Hari Sekhon
>Priority: Major
> Attachments: HDFS-13739-01.patch
>
>
> Request to be able to disable Rack Local Write preference / Write All 
> Replicas to different Racks.
> Current HDFS write pattern of "local node, rack local node, other rack node" 
> is good for most purposes but there are at least 2 scenarios where this is 
> not ideal:
>  # Rack-by-Rack Maintenance leaves data at risk of losing last remaining 
> replica. If a single datanode failed it would likely cause some data outage 
> or even data loss if the rack is lost or an upgrade fails (or perhaps it's a 
> rack rebuild). Setting replicas to 4 would reduce write performance and waste 
> storage which is currently the only workaround to that issue.
>  # Major Storage Imbalance across datanodes when there is an uneven layout of 
> datanodes across racks - some nodes fill up while others are half empty.
> I have observed this storage imbalance on a cluster where half the nodes were 
> 85% full and the other half were only 50% full.
> Rack layouts like the following illustrate this - the nodes in the same rack 
> will only choose to send half their block replicas to each other, so they 
> will fill up first, while other nodes will receive far fewer replica blocks:
> {code:java}
> NumNodes - Rack 
> 2 - rack 1
> 2 - rack 2
> 1 - rack 3
> 1 - rack 4 
> 1 - rack 5
> 1 - rack 6{code}
> In this case if I reduce the number of replicas to 2 then I get an almost 
> perfect spread of blocks across all datanodes because HDFS has no choice but 
> to maintain the only 2nd replica on a different rack. If I increase the 
> replicas back to 3 it goes back to 85% on half the nodes and 50% on the other 
> half, because the extra replicas choose to replicate only to rack local nodes.
> Why not just run the HDFS balancer to fix it you might say? This is a heavily 
> loaded HBase cluster - aside from destroying HBase's data locality and 
> performance by moving blocks out from underneath RegionServers - as soon as 
> an HBase major compaction occurs (at least weekly), all blocks will get 
> re-written by HBase and the HDFS client will again write to local node, rack 
> local node, other rack node - resulting in the same storage imbalance again. 
> Hence this cannot be solved by running HDFS balancer on HBase clusters - or 
> for any application sitting on top of HDFS that has any HDFS block churn.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13739) Option to disable Rack Local Write Preference to avoid 2 issues - 1. Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major Storage Imbalance across Da

2019-06-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13739:

Attachment: HDFS-13739-01.patch

> Option to disable Rack Local Write Preference to avoid 2 issues - 1. 
> Rack-by-Rack Maintenance leaves last data replica at risk, 2. avoid Major 
> Storage Imbalance across DataNodes caused by uneven spread of Datanodes 
> across Racks
> ---
>
> Key: HDFS-13739
> URL: https://issues.apache.org/jira/browse/HDFS-13739
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, block placement, datanode, fs, 
> hdfs, hdfs-client, namenode, nn, performance
>Affects Versions: 2.7.3
> Environment: Hortonworks HDP 2.6
>Reporter: Hari Sekhon
>Priority: Major
> Attachments: HDFS-13739-01.patch
>
>
> Request to be able to disable Rack Local Write preference / Write All 
> Replicas to different Racks.
> Current HDFS write pattern of "local node, rack local node, other rack node" 
> is good for most purposes but there are at least 2 scenarios where this is 
> not ideal:
>  # Rack-by-Rack Maintenance leaves data at risk of losing last remaining 
> replica. If a single datanode failed it would likely cause some data outage 
> or even data loss if the rack is lost or an upgrade fails (or perhaps it's a 
> rack rebuild). Setting replication to 4, which is currently the only 
> workaround, would reduce write performance and waste storage.
>  # Major Storage Imbalance across datanodes when there is an uneven layout of 
> datanodes across racks - some nodes fill up while others are half empty.
> I have observed this storage imbalance on a cluster where half the nodes were 
> 85% full and the other half were only 50% full.
> Rack layouts like the following illustrate this - the nodes in the same rack 
> will only choose to send half their block replicas to each other, so they 
> will fill up first, while other nodes will receive far fewer replica blocks:
> {code:java}
> NumNodes - Rack 
> 2 - rack 1
> 2 - rack 2
> 1 - rack 3
> 1 - rack 4 
> 1 - rack 5
> 1 - rack 6{code}
> In this case if I reduce the number of replicas to 2 then I get an almost 
> perfect spread of blocks across all datanodes because HDFS has no choice but 
> to maintain the only 2nd replica on a different rack. If I increase the 
> replicas back to 3 it goes back to 85% on half the nodes and 50% on the other 
> half, because the extra replicas choose to replicate only to rack local nodes.
> Why not just run the HDFS balancer to fix it you might say? This is a heavily 
> loaded HBase cluster - aside from destroying HBase's data locality and 
> performance by moving blocks out from underneath RegionServers - as soon as 
> an HBase major compaction occurs (at least weekly), all blocks will get 
> re-written by HBase and the HDFS client will again write to local node, rack 
> local node, other rack node - resulting in the same storage imbalance again. 
> Hence this cannot be solved by running HDFS balancer on HBase clusters - or 
> for any application sitting on top of HDFS that has any HDFS block churn.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14220:

Attachment: HDFS-14220-02.patch

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14220-01.patch, HDFS-14220-02.patch
>
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java,
> with two if conditions:
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica count per rack.
>  
>  
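
As a hedged illustration of what "a property to specify ... the replica value per rack" could look like, here is a minimal sketch. Nothing in it is existing Hadoop code: the property name dfs.block.placement.max-replicas-per-rack is invented for the example, and the helper only shows the check such a property would gate inside a placement policy.

{code:java}
import java.util.*;

/**
 * Hypothetical sketch of a per-rack replica limit. This is not code from
 * BlockPlacementPolicyDefault; the property name below is an assumption.
 */
public class PerRackLimitSketch {
  // Assumed (non-existent) configuration key, for illustration only.
  static final String MAX_REPLICAS_PER_RACK_KEY = "dfs.block.placement.max-replicas-per-rack";

  /** Would one more replica on candidateRack stay within the per-rack limit? */
  static boolean allowedOnRack(List<String> chosenRacks, String candidateRack, int maxPerRack) {
    long alreadyOnRack = chosenRacks.stream().filter(candidateRack::equals).count();
    return alreadyOnRack < maxPerRack;
  }

  public static void main(String[] args) {
    List<String> chosen = Collections.singletonList("rack1");  // first replica already on rack1
    System.out.println(allowedOnRack(chosen, "rack1", 1));     // false: would exceed 1 per rack
    System.out.println(allowedOnRack(chosen, "rack2", 1));     // true: different rack
  }
}
{code}

A policy honouring such a key would read it once from the Configuration (e.g. conf.getInt(MAX_REPLICAS_PER_RACK_KEY, defaultValue)) and apply the check above while choosing each target.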



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854116#comment-16854116
 ] 

Hadoop QA commented on HDFS-14220:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 486 unchanged - 0 fixed = 490 total (was 486) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970614/HDFS-14220-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6daea1ae4642 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2210897 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26883/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26883/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/

[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-06-02 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854091#comment-16854091
 ] 

Brahma Reddy Battula commented on HDFS-14358:
-

[~Sushma_28] thanks for reporting this, and [~hemanthboyina] thanks for the patch.

Having two dropdown boxes looks odd; how about unifying them into the existing dropdown box?

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) HDFS RBF stabilization phase I

2019-06-02 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854088#comment-16854088
 ] 

Brahma Reddy Battula commented on HDFS-13891:
-

{quote}I'm German ;) - I'm fine with "über"
{quote}
Oh, OK.

RBF (Router Based Federation) is an HDFS feature. To keep titles short, we use "RBF" in all of 
these JIRAs, including the sub-tasks. Hopefully the title and description are OK now.

> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF (Router Based Federation) shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13891) HDFS RBF stabilization phase I

2019-06-02 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13891:

Description: 
RBF (Router Based Federation) shipped in 3.0+ and 2.9.

Now that it is out, various corner cases, scale and error handling issues are surfacing.

We are also targeting the security feature (HDFS-13532).

This umbrella is to fix all those issues and support the missing protocols (HDFS-13655) 
before the next 3.3 release.

  was:
RBF shipped in 3.0+ and 2.9.

Now that it is out, various corner cases, scale and error handling issues are surfacing.

We are also targeting the security feature (HDFS-13532).

This umbrella is to fix all those issues and support the missing protocols (HDFS-13655) 
before the next 3.3 release.


> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF (Router Based Federation) shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) HDFS RBF stabilization phase I

2019-06-02 Thread Lars Francke (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854084#comment-16854084
 ] 

Lars Francke commented on HDFS-13891:
-

I'm German ;) - I'm fine with "über"

I meant "RBF" though.

> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-06-02 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14358:

Issue Type: Improvement  (was: Wish)

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-06-02 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854078#comment-16854078
 ] 

Brahma Reddy Battula commented on HDFS-13596:
-

[~jojochuang] do you have any comments on the final patch?

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch
>
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:69
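
The bullet points above describe a format/version skew: transactions are written in the new (erasure-coding aware) format but replayed as if they were the old layout, so every field after the skipped one is read from the wrong offset. The toy sketch below reproduces just that mechanism; the field names and sizes are invented for illustration and are not the real FSEditLogOp serialization.

{code:java}
import java.io.*;

/**
 * Conceptual sketch (not Hadoop code) of the failure mode: a record written in
 * the "new" transaction format, which adds an erasure-coding byte, is read back
 * assuming the "old" layout version, so later fields are shifted and misparsed.
 */
public class LayoutVersionSkewSketch {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);

    // "New" writer: inodeId, erasure-coding policy id, then client id length.
    out.writeLong(16389L);   // inodeId
    out.writeByte(3);        // EC policy id -- the field the old format lacks
    out.writeShort(16);      // client id length
    out.flush();

    // "Old" reader: unaware of the EC byte, it reads the client id length
    // starting one byte too early.
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
    long inodeId = in.readLong();
    int clientIdLen = in.readShort();   // yields 0x0300 = 768 instead of 16
    System.out.println("inodeId=" + inodeId + ", clientIdLen=" + clientIdLen);
  }
}
{code}

This is the same shape of failure as the "Invalid clientId - length is 0" and "Cannot skip to less than the current value" errors quoted above: once one field is skipped or mis-sized, everything downstream is garbage.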

[jira] [Updated] (HDFS-13891) HDFS RBF stabilization phase I

2019-06-02 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13891:

Summary: HDFS RBF stabilization phase I  (was: Über-jira: RBF 
stabilisation phase I)

> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14528) [SBN Read]Failover from Active to Standby Failed

2019-06-02 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854073#comment-16854073
 ] 

Brahma Reddy Battula commented on HDFS-14528:
-

[~Sushma_28] thanks for reporting the issue and uploading the patch.

Yes, we need to skip the observer node during failover. The fix looks fine to me.

[~xkrogen] and [~csun], would you also take a look at this?

Minor nits:

Please fix the checkstyle issue.

Try to add a unit test for this.

Please go through the following for the patch naming convention.

[https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Namingyourpatch]

 

> [SBN Read]Failover from Active to Standby Failed  
> --
>
> Key: HDFS-14528
> URL: https://issues.apache.org/jira/browse/HDFS-14528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: ZKFC_issue.patch
>
>
> *Started an HA cluster with three nodes [ _Active, Standby, Observer_ ].*
> *When trying to execute the failover command from active to standby*
> *(_./hdfs haadmin -failover nn1 nn2_), the below exception is thrown:*
>   Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see: [http://wiki.apache.org/hadoop/ConnectionRefused]
>  at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>  at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755) 
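
For readers following the "skip observer" discussion above: conceptually, the failover logic just needs to exclude nodes reporting the OBSERVER state when picking a target. The sketch below is not the ZKFC/DFSHAAdmin code; the enum is a local stand-in for the HA service state, and the selection logic is simplified to a single filter.

{code:java}
import java.util.*;
import java.util.stream.Collectors;

/** Minimal sketch: when choosing a failover target, keep only STANDBY nodes. */
public class FailoverTargetSketch {
  // Local stand-in for the real HA service state enum.
  enum HAServiceState { ACTIVE, STANDBY, OBSERVER }

  static List<String> candidateTargets(Map<String, HAServiceState> nnStates) {
    return nnStates.entrySet().stream()
        .filter(e -> e.getValue() == HAServiceState.STANDBY)   // skips ACTIVE and OBSERVER
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    Map<String, HAServiceState> states = new LinkedHashMap<>();
    states.put("nn1", HAServiceState.ACTIVE);
    states.put("nn2", HAServiceState.STANDBY);
    states.put("nn3", HAServiceState.OBSERVER);
    System.out.println(candidateTargets(states));   // prints [nn2]
  }
}
{code}

A unit test along these lines (three states in, only the standby out) is roughly what the review comment above asks for.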



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) Über-jira: RBF stabilisation phase I

2019-06-02 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854065#comment-16854065
 ] 

Brahma Reddy Battula commented on HDFS-13891:
-

[~larsfrancke] thanks for your comment.

"über" is a German word (roughly "over" or "super") that has been used in some Hadoop 
umbrella JIRAs, e.g. HADOOP-13204 and HADOOP-11694, so I used it the same way.

Sure, I can rename the jira.

> Über-jira: RBF stabilisation phase I  
> --
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) Über-jira: RBF stabilisation phase I

2019-06-02 Thread Lars Francke (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854051#comment-16854051
 ] 

Lars Francke commented on HDFS-13891:
-

I don't want to just do it myself, but it'd be great if you could rename the 
issue to avoid an acronym that isn't even explained in the ticket itself, for 
people coming to this from the outside.

> Über-jira: RBF stabilisation phase I  
> --
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Amithsha (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853990#comment-16853990
 ] 

Amithsha commented on HDFS-14220:
-

[~ayushtkn] Thanks for the patch.

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14220-01.patch
>
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java,
> with two if conditions:
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica count per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853985#comment-16853985
 ] 

Ayush Saxena commented on HDFS-14220:
-

[~Amithsha] thanks for helping with the scenario.
I have uploaded a patch for it.

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14220-01.patch
>
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java,
> with two if conditions:
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica count per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14220:

Status: Patch Available  (was: Open)

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14220-01.patch
>
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java,
> with two if conditions:
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica count per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14220:

Attachment: HDFS-14220-01.patch

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14220-01.patch
>
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java,
> with two if conditions:
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica count per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14220) Enable Replica Placement Value Per Rack

2019-06-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-14220:
---

  Assignee: Ayush Saxena
  Priority: Major  (was: Trivial)
Issue Type: Improvement  (was: New Feature)

> Enable Replica Placement Value Per Rack
> ---
>
> Key: HDFS-14220
> URL: https://issues.apache.org/jira/browse/HDFS-14220
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Amithsha
>Assignee: Ayush Saxena
>Priority: Major
>
> By default, replica placement per rack is handled by 
> BlockPlacementPolicyDefault.java,
> with two if conditions:
>  # numOfRacks < 1 
>  # numOfRacks > 1
> and placement happens as 1 replica on the local rack and 2 on a remote rack.
> If a user needs at most 1 replica per rack, BlockPlacementPolicyDefault.java 
> itself has to be modified; instead, we could add a property to specify the 
> placement policy and the replica count per rack.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org